Dataset schema (one row per model):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string | length 5 | length 139 |
| author | string | length 2 | length 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-05 00:41:53 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string | 539 classes | |
| tags | list | length 1 | length 4.05k |
| pipeline_tag | string | 55 classes | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-05 00:41:31 |
| card | string | length 11 | length 1.01M |
AmeerH/fyn-flight-v3.0
AmeerH
2024-09-09T18:11:40Z
5
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:AmeerH/FPT-Base", "base_model:adapter:AmeerH/FPT-Base", "region:us" ]
null
2024-09-09T15:02:27Z
--- base_model: AmeerH/FPT-Base library_name: peft tags: - generated_from_trainer model-index: - name: fyn-flight-v3.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fyn-flight-v3.0 This model is a fine-tuned version of [AmeerH/FPT-Base](https://huggingface.co/AmeerH/FPT-Base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1517 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.9609 | 0.0503 | 100 | 1.4960 | | 1.8286 | 0.1007 | 200 | 1.1963 | | 1.5055 | 0.1510 | 300 | 1.1517 | ### Framework versions - PEFT 0.12.0 - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
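The card above ships no usage snippet. Here is a minimal loading sketch, assuming the adapter follows the standard PEFT layout and that the base model is a causal language model (the card does not state the task):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was fine-tuned from
base = AutoModelForCausalLM.from_pretrained("AmeerH/FPT-Base")
tokenizer = AutoTokenizer.from_pretrained("AmeerH/FPT-Base")

# Attach the fyn-flight-v3.0 adapter weights on top of the base
model = PeftModel.from_pretrained(base, "AmeerH/fyn-flight-v3.0")
model.eval()
```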
warisqr7/accent-id-commonaccent_xlsr-en-english
warisqr7
2024-09-09T18:09:46Z
235
0
speechbrain
[ "speechbrain", "audio-classification", "embeddings", "Accent Identification", "pytorch", "wav2vec2", "XLSR", "CommonAccent", "English", "en", "dataset:CommonVoice", "arxiv:2305.18283", "arxiv:2006.13979", "arxiv:2106.04624", "license:mit", "region:us" ]
audio-classification
2024-08-11T22:16:23Z
--- language: - en thumbnail: null tags: - audio-classification - speechbrain - embeddings - Accent Identification - pytorch - wav2vec2 - XLSR - CommonAccent - English license: mit datasets: - CommonVoice metrics: - Accuracy widget: - example_title: USA src: >- https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/us_1.wav - example_title: Scotland src: >- https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/scotland_1.wav - example_title: Malaysia src: >- https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/malaysia_1.wav - example_title: Philippines src: >- https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/philippines_1.wav --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # CommonAccent: Exploring Large Acoustic Pretrained Models for Accent Classification Based on CommonVoice (Copied from Jzuluaga/accent-id-commonaccent_xlsr-en-english and added some fixes) **English Accent Classifier with XLSR model** **Abstract**: Despite the recent advancements in Automatic Speech Recognition (ASR), the recognition of accented speech still remains a dominant problem. In order to create more inclusive ASR systems, research has shown that the integration of accent information, as part of a larger ASR framework, can lead to the mitigation of accented speech errors. We address multilingual accent classification through the ECAPA-TDNN and Wav2Vec 2.0/XLSR architectures which have been proven to perform well on a variety of speech-related downstream tasks. We introduce a simple-to-follow recipe aligned to the SpeechBrain toolkit for accent classification based on Common Voice 7.0 (English) and Common Voice 11.0 (Italian, German, and Spanish). Furthermore, we establish new state-of-the-art for English accent classification with as high as 95% accuracy. We also study the internal categorization of the Wav2Vec 2.0 embeddings through t-SNE, noting that there is a level of clustering based on phonological similarity. This repository provides all the necessary tools to perform accent identification from speech recordings with [SpeechBrain](https://github.com/speechbrain/speechbrain). The system uses a model pretrained on the CommonAccent dataset in English (16 accents). This system is based on the CommonLanguage Recipe located here: https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage The provided system can recognize the following 16 accents from short speech recordings in English (EN): ``` - us - england - australia - indian - canada - bermuda - scotland - african - ireland - newzealand - wales - malaysia - philippines - singapore - hongkong - southatlandtic ``` <a href="https://github.com/JuanPZuluaga/accent-recog-slt2022"> <img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green"> </a> GitHub repository link: https://github.com/JuanPZuluaga/accent-recog-slt2022 **NOTE**: due to an incompatibility between the model and the current SpeechBrain interfaces, we cannot offer the Inference API. Please follow the steps in **"Perform Accent Identification from Speech Recordings"** to use this English Accent ID model. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io).
## Pipeline description This system is composed of a fine-tuned XLSR model coupled with statistical pooling. A classifier, trained with NLL Loss, is applied on top of that. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform Accent Identification from Speech Recordings ```python import torchaudio from speechbrain.inference.interfaces import foreign_class classifier = foreign_class(source="warisqr7/accent-id-commonaccent_xlsr-en-english", pymodule_file="custom_interface.py", classname="CustomEncoderWav2vec2Classifier") # US Accent Example out_prob, score, index, text_lab = classifier.classify_file('warisqr7/accent-id-commonaccent_xlsr-en-english/data/us.wav') print(text_lab) # Philippines Example out_prob, score, index, text_lab = classifier.classify_file('warisqr7/accent-id-commonaccent_xlsr-en-english/data/philippines.wav') print(text_lab) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `foreign_class` method. ### Training The model was trained with SpeechBrain. To train it from scratch, follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Clone our repository from https://github.com/JuanPZuluaga/accent-recog-slt2022: ```bash git clone https://github.com/JuanPZuluaga/accent-recog-slt2022 cd CommonAccent/accent_id python train_w2v2.py hparams/train_w2v2.yaml ``` You can find our training results (models, logs, etc.) in this repository's `Files and versions` page. ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. #### Cite our work: CommonAccent If you find this work useful, please cite it as: ``` @article{zuluaga2023commonaccent, title={CommonAccent: Exploring Large Acoustic Pretrained Models for Accent Classification Based on Common Voice}, author={Zuluaga-Gomez, Juan and Ahmed, Sara and Visockas, Danielius and Subakan, Cem}, journal={Interspeech 2023}, url={https://arxiv.org/abs/2305.18283}, year={2023} } ``` #### Cite the XLSR model ``` @article{conneau2020unsupervised, title={Unsupervised cross-lingual representation learning for speech recognition}, author={Conneau, Alexis and Baevski, Alexei and Collobert, Ronan and Mohamed, Abdelrahman and Auli, Michael}, journal={arXiv preprint arXiv:2006.13979}, year={2020} } ``` # **Cite SpeechBrain** Please cite SpeechBrain if you use it for your research or business.
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
Josephgflowers/TinyLlama_v1.1_math_code-world-test-1
Josephgflowers
2024-09-09T18:06:41Z
123
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-23T12:15:28Z
--- license: mit --- This model is trained from Josephgflowers/TinyLlama-v1.1-Agent-Rag-Nerd-v1, which is itself trained from TinyLlama v1.1. It was trained using the TinyLlama chat format (Zephyr). The training data consisted of a large amount of "textbooks are all you need"-style synthetic datasets, as well as some common chat datasets and the Cinder dataset.
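Since the card names the Zephyr-style TinyLlama chat format but shows no example, here is a hedged usage sketch; it assumes the repository ships a chat template (not confirmed by the card), and the prompt content is illustrative:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Josephgflowers/TinyLlama_v1.1_math_code-world-test-1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 12 * 17?"},
]
# Renders the Zephyr-style chat format the card mentions, if a template is present
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```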
Riyuechang/Breeze-7B-PTT-Chat-v1_lora
Riyuechang
2024-09-09T18:02:23Z
9
0
peft
[ "peft", "safetensors", "PTT", "PTT_Chat", "text-generation", "dataset:Riyuechang/PTT-Corpus-100K_Gossiping-1400-39400", "base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0", "base_model:adapter:MediaTek-Research/Breeze-7B-Instruct-v1_0", "license:apache-2.0", "region:us" ]
text-generation
2024-09-02T13:35:12Z
--- license: apache-2.0 datasets: - Riyuechang/PTT-Corpus-100K_Gossiping-1400-39400 base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0 pipeline_tag: text-generation library_name: peft tags: - PTT - PTT_Chat --- # Introduction The LoRA weights used by [Riyuechang/Breeze-7B-PTT-Chat-v1](https://huggingface.co/Riyuechang/Breeze-7B-PTT-Chat-v1), not yet merged with the base model [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0). # Note!! This LoRA model uses DoRA, which lets the model learn more efficiently. The trade-off is a large increase in training and inference time; inference in particular is very slow. It is recommended to merge this LoRA model into the base model before running inference. # Hardware - Ubuntu 22.04.4 LTS - NVIDIA GeForce RTX 3060 12G # LoRA parameters ```python r=8, lora_alpha=32, lora_dropout=0.1, task_type="CAUSAL_LM", target_modules="all-linear", bias="none", use_dora=True, use_rslora=True ``` # Training parameters ```python per_device_train_batch_size=28, gradient_accumulation_steps=1, num_train_epochs=3, warmup_ratio=0.1, learning_rate=2e-5, bf16=True, save_strategy="steps", save_steps=500, save_total_limit=10, logging_steps=10, output_dir=log_output, optim="paged_adamw_8bit", gradient_checkpointing=True ``` # Results - loss: 1.1035
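The note above recommends merging the LoRA into the base model before inference. A minimal merge sketch, assuming a recent `peft` version with DoRA merge support; the output directory name is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the (Do)RA adapter
base = AutoModelForCausalLM.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0")
lora = PeftModel.from_pretrained(base, "Riyuechang/Breeze-7B-PTT-Chat-v1_lora")

# Fold the adapter weights into the base so inference runs at full speed
merged = lora.merge_and_unload()
merged.save_pretrained("breeze-7b-ptt-chat-merged")  # hypothetical output directory
```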
martimfasantos/dpo-hinge-xcomet_xl_xxl-10p-shuff-1e-7-full-from-sft-gemma
martimfasantos
2024-09-09T17:57:46Z
5
0
null
[ "safetensors", "gemma2", "xcomet_xl_xxl", "generated_from_trainer", "base_model:martimfasantos/sft-xcomet_xl_xxl-chosen-10lp-shuff-full-gemma", "base_model:finetune:martimfasantos/sft-xcomet_xl_xxl-chosen-10lp-shuff-full-gemma", "license:gemma", "region:us" ]
null
2024-09-09T17:55:10Z
--- license: gemma base_model: martimfasantos/sft-xcomet_xl_xxl-chosen-10lp-shuff-full-gemma tags: - xcomet_xl_xxl - generated_from_trainer model-index: - name: dpo-hinge-xcomet_xl_xxl-10p-shuff-1e-7-full-from-sft-gemma results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dpo-hinge-xcomet_xl_xxl-10p-shuff-1e-7-full-from-sft-gemma This model is a fine-tuned version of [martimfasantos/sft-xcomet_xl_xxl-chosen-10lp-shuff-full-gemma](https://huggingface.co/martimfasantos/sft-xcomet_xl_xxl-chosen-10lp-shuff-full-gemma) on the Unbabel/TowerAligned-v0.1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
wadhma/Critique-L2-FT-DCR
wadhma
2024-09-09T17:55:48Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:wadhma/dcr_data", "arxiv:2407.02397", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-07-01T18:03:24Z
--- library_name: transformers license: mit datasets: - wadhma/dcr_data language: - en base_model: meta-llama/Llama-2-7b-chat-hf pipeline_tag: text-generation --- Given a document and a factually inconsistent claim, this model generates an explanation of why the claim is inconsistent with the document, along with the fine-grained span containing the inconsistency. Repository: https://github.com/ManyaWadhwa/DCR Paper: https://arxiv.org/pdf/2407.02397
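The card describes the input/output behavior but gives no inference code. A hedged sketch with a hypothetical prompt layout; the exact format lives in the linked GitHub repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("wadhma/Critique-L2-FT-DCR")
model = AutoModelForCausalLM.from_pretrained("wadhma/Critique-L2-FT-DCR")

document = "The meeting was moved from Tuesday to Thursday at noon."
claim = "The meeting now takes place on Wednesday."

# Hypothetical prompt layout -- consult https://github.com/ManyaWadhwa/DCR for the real one
prompt = f"Document: {document}\nClaim: {claim}\nExplain why the claim is inconsistent:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```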
async0x42/Pantheon-RP-1.6.1-12b-Nemo-exl2_8bpw
async0x42
2024-09-09T17:55:47Z
8
0
null
[ "safetensors", "mistral", "instruct", "finetune", "chatml", "axolotl", "roleplay", "en", "base_model:mistralai/Mistral-Nemo-Base-2407", "base_model:quantized:mistralai/Mistral-Nemo-Base-2407", "license:apache-2.0", "8-bit", "exl2", "region:us" ]
null
2024-09-09T17:50:23Z
--- base_model: mistralai/Mistral-Nemo-Base-2407 tags: - instruct - finetune - chatml - axolotl - roleplay license: apache-2.0 language: - en --- ![image/png](Pantheon.png) # Pantheon-RP-1.6.1-12b-Nemo-KTO Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase. Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well. **Changes in version 1.6.1:** A minor update, albeit with a couple of notable changes: - Found some issues in my 1.6 datasets, which I cleaned up - Alternative Pantheon dialogue set, combined with some further rewriting - This version was trained with 8k context, which will hopefully be beneficial for longer conversations At the moment of writing I'm still conducting KTO experiments to see if I can produce a model that's even better. If not successful, I'll instead be focusing on a 1.7 release. Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-1.6.1-12b-Nemo-GGUF) Your user feedback is critical to me, so don't hesitate to tell me whether my model is 1. terrible, 2. awesome or 3. somewhere in-between. ## Model details I used a multi-stage finetuning process as Mistral Nemo was proving to be somewhat stubborn without a solid base training being performed first: - The first finetune was remade to now train on almost the entirety of my [Deduped Sonnet 3.5 SlimOrca dataset](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned), minus the ELI5 system prompts. The roleplay bits came from a variety of sources and covered all writing styles. - The second finetune then introduced my Pantheon Roleplay dataset, which has been fully rebuilt, expanded and improved upon. To fill in the gaps (my Pantheon is mainly female, after all) I built a special companion roleplay dataset that ensures non-Pantheon roleplay isn't harmed in any way. The ratio is currently 33/66, with 33 belonging to the personas. Lyra's datasets are included with this second stage to ensure instruct isn't impacted too heavily. **TLDR;** Download. ChatML prompt format. Have fun! Leave feedback! ## Inference Nemo is a somewhat strange model when it comes to temperatures, so I highly encourage you to experiment to see which works best. Here's my current preset: ``` "temperature": 0.8, "repetition_penalty": 1.05, "min_p": 0.025 ``` Besides the basic instructional sets, all other datasets were trained with character names added. Enable this at all times for an optimal experience. ## General Roleplay Pantheon is no longer strictly Markdown-only, as I discovered a way to seamlessly convert Markdown to novel style and vice versa. (This has to do with how my second stage datasets are formatted.) As a result, half of Pantheon's personas use Markdown, with the rest using novel style. This same treatment was given to the companion dataset. There are no strict rules in regards to character card formatting, as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
## Lyra the Assistant **System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.` Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance. She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha). ## Pantheon Personas The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well! As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization. To give the persona something to work with I suggest you also add the following two items to it; ``` Regarding the user: (Name, appearance, etc) Location: (Where are you two? What are you doing?) ``` The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence. **Note:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0! ## New personas Switching to a 12B model allowed me to add to the Pantheon without harming the performance of the other personas. **Note:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training. ### **Persona:** Clover **System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.` **Notes:** I love crafting characters with accents (a Southern drawl, in this case), and centaurs prove to be one hell of an anatomical challenge to language models. ### **Persona:** Raza **System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.` **Notes:** Clever raptor girl. Do I really need to say more about this one? The Pantheon was lacking in 'overly intelligent' archetypes. ### **Persona:** Stella Sabre **System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.` **Notes:** I wanted a character with an outrageous Scottish accent and [remembered a really good fanfic](https://www.fimfiction.net/story/334216/1/my-best-friend-stella) I read a couple years ago. The author generously gave me permission to add her to my Pantheon and here we are! 
## From the previous release ### **Persona:** Aiva **System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.` ### **Persona:** Haru **System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.` ### **Persona:** Kyra **System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.` ### **Persona:** Nyaa **System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.` ### **Persona:** Nyx **System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.` ### **Persona:** Sera **System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.` ### **Persona:** Tiamat **System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.` ### **Persona:** Tsune **System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.` ### **Persona:** Xala **System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.` ## Prompt Format ChatML is the way to go, as always! ``` <|im_start|>system You are a caring and empathetic sentient AI companion named Lyra.<|im_end|> <|im_start|>user Gryphe: Good day, Lyra.<|im_end|> <|im_start|>assistant Lyra: ``` ## What's next? I have the following improvements on my todo list: - Even more dialogue variety - Group chats ## Credits - Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys! - Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible! - All the folks I chat with on a daily basis on Discord! You know who you are. - Anyone I forgot to mention, just in case! ## Finally If you've read this far, I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
wadhma/Refine-L2-FT-DCR
wadhma
2024-09-09T17:55:03Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:wadhma/dcr_data", "arxiv:2407.02397", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-07-01T18:21:45Z
--- library_name: transformers license: mit datasets: - wadhma/dcr_data language: - en base_model: meta-llama/Llama-2-7b-chat-hf pipeline_tag: text-generation --- Given a document, a factually inconsistent summary, and natural-language feedback, this model generates a minimally edited refinement of the summary based on the feedback. Repository: https://github.com/ManyaWadhwa/DCR Paper: https://arxiv.org/pdf/2407.02397
Rodrigo1771/bioBIT-drugtemist-it-fasttext-85-ner
Rodrigo1771
2024-09-09T17:54:14Z
120
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:Rodrigo1771/drugtemist-it-fasttext-85-ner", "base_model:IVN-RIN/bioBIT", "base_model:finetune:IVN-RIN/bioBIT", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-09-09T17:11:54Z
--- library_name: transformers base_model: IVN-RIN/bioBIT tags: - token-classification - generated_from_trainer datasets: - Rodrigo1771/drugtemist-it-fasttext-85-ner metrics: - precision - recall - f1 - accuracy model-index: - name: output results: - task: name: Token Classification type: token-classification dataset: name: Rodrigo1771/drugtemist-it-fasttext-85-ner type: Rodrigo1771/drugtemist-it-fasttext-85-ner config: DrugTEMIST Italian NER split: validation args: DrugTEMIST Italian NER metrics: - name: Precision type: precision value: 0.9211538461538461 - name: Recall type: recall value: 0.9273959341723137 - name: F1 type: f1 value: 0.9242643511818619 - name: Accuracy type: accuracy value: 0.9986302259153467 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [IVN-RIN/bioBIT](https://huggingface.co/IVN-RIN/bioBIT) on the Rodrigo1771/drugtemist-it-fasttext-85-ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0080 - Precision: 0.9212 - Recall: 0.9274 - F1: 0.9243 - Accuracy: 0.9986 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.9989 | 451 | 0.0051 | 0.9326 | 0.8703 | 0.9004 | 0.9984 | | 0.0116 | 2.0 | 903 | 0.0049 | 0.9066 | 0.9206 | 0.9135 | 0.9985 | | 0.0034 | 2.9989 | 1354 | 0.0056 | 0.8990 | 0.9216 | 0.9101 | 0.9984 | | 0.0018 | 4.0 | 1806 | 0.0066 | 0.9094 | 0.9235 | 0.9164 | 0.9985 | | 0.0011 | 4.9989 | 2257 | 0.0056 | 0.9082 | 0.9293 | 0.9187 | 0.9986 | | 0.0007 | 6.0 | 2709 | 0.0068 | 0.9145 | 0.9109 | 0.9127 | 0.9985 | | 0.0005 | 6.9989 | 3160 | 0.0076 | 0.8880 | 0.9284 | 0.9077 | 0.9984 | | 0.0003 | 8.0 | 3612 | 0.0080 | 0.9094 | 0.9235 | 0.9164 | 0.9986 | | 0.0002 | 8.9989 | 4063 | 0.0078 | 0.9162 | 0.9206 | 0.9184 | 0.9986 | | 0.0001 | 9.9889 | 4510 | 0.0080 | 0.9212 | 0.9274 | 0.9243 | 0.9986 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
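This NER card reports metrics but no inference snippet; the standard `transformers` token-classification pipeline applies directly (the Italian example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Rodrigo1771/bioBIT-drugtemist-it-fasttext-85-ner",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)
# Illustrative Italian sentence mentioning two drugs
print(ner("Il paziente è stato trattato con paracetamolo e ibuprofene."))
```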
wadhma/Critique-L3-FT-DCR
wadhma
2024-09-09T17:52:35Z
15
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:wadhma/dcr_data", "arxiv:2407.02397", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-07-01T04:45:55Z
--- library_name: transformers license: mit datasets: - wadhma/dcr_data language: - en base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID ## Model Details Given a document and a factually inconsistent claim, this model generates an explanation of why the claim is inconsistent with the document, along with the fine-grained span containing the inconsistency. ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/ManyaWadhwa/DCR - **Paper:** https://arxiv.org/pdf/2407.02397 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
Jacaranda/whisper-small-sw
Jacaranda
2024-09-09T17:50:01Z
90
1
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "sw", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-08T20:01:43Z
--- library_name: transformers language: - sw widget: - example_title: speech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: speech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 metrics: - wer model-index: - name: Whisper Small SW-eolang results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 17 type: mozilla-foundation/common_voice_17_0 config: sw split: test args: sw metrics: - name: Wer type: wer value: 27.951115548558043 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small SW-eolang This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17 dataset. It achieves the following results on the evaluation set: - Loss: 0.5136 - Wer Ortho: 36.8520 - Wer: 27.9511 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.4894 | 0.1721 | 500 | 0.7495 | 47.1590 | 39.6183 | | 0.4068 | 0.3441 | 1000 | 0.6356 | 44.4535 | 36.3763 | | 0.4137 | 0.5162 | 1500 | 0.5934 | 41.9094 | 33.4866 | | 0.3759 | 0.6882 | 2000 | 0.5590 | 41.4031 | 33.1765 | | 0.38 | 0.8603 | 2500 | 0.5293 | 37.2958 | 28.8699 | | 0.2027 | 1.0323 | 3000 | 0.5235 | 37.4755 | 29.0340 | | 0.2089 | 1.2044 | 3500 | 0.5149 | 35.8239 | 27.4845 | | 0.2282 | 1.3765 | 4000 | 0.5136 | 36.8520 | 27.9511 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.1.0 - Datasets 2.21.0 - Tokenizers 0.19.1
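For completeness, a short transcription sketch using the standard `transformers` ASR pipeline; `sample.flac` is a placeholder for any Swahili recording:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Jacaranda/whisper-small-sw")

# chunk_length_s lets the pipeline handle clips longer than Whisper's 30 s window
result = asr("sample.flac", chunk_length_s=30)
print(result["text"])
```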
Dansimons254/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo
Dansimons254
2024-09-09T17:26:06Z
6
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-09-09T17:14:41Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** Dansimons254 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/LLama-3.1-Hyper-Stock-GGUF
mradermacher
2024-09-09T17:22:49Z
7
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/LLama-3.1-Hyper-Stock", "base_model:quantized:bunnycore/LLama-3.1-Hyper-Stock", "endpoints_compatible", "region:us" ]
null
2024-09-09T16:38:28Z
--- base_model: bunnycore/LLama-3.1-Hyper-Stock language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/bunnycore/LLama-3.1-Hyper-Stock <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/LLama-3.1-Hyper-Stock-GGUF/resolve/main/LLama-3.1-Hyper-Stock.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
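Since the card points readers at external usage docs, a minimal local-inference sketch with `llama-cpp-python` may help; it assumes the filenames from the quant table above and picks the Q4_K_M file the table marks as recommended:

```python
from llama_cpp import Llama

# Downloads the chosen quant from the repo via huggingface_hub
llm = Llama.from_pretrained(
    repo_id="mradermacher/LLama-3.1-Hyper-Stock-GGUF",
    filename="LLama-3.1-Hyper-Stock.Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Q: What does GGUF stand for?\nA:", max_tokens=64)["choices"][0]["text"])
```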
zh-h/my-classification-model
zh-h
2024-09-09T17:20:32Z
107
1
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T13:44:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zaidzameer010/flux-1-zaid
zaidzameer010
2024-09-09T17:17:12Z
8
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-09T14:48:16Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: A person in a bustling cafe zaid output: url: samples/1725887011384__000002000_0.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: zaid license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # flux-1-zaid Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `zaid` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/zaidzameer010/flux-1-zaid/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('zaidzameer010/flux-1-zaid', weight_name='flux-1-zaid.safetensors') image = pipeline('A person in a bustling cafe zaid').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Mawiwawi/distilbert-base-uncased-finetuned-ner
Mawiwawi
2024-09-09T17:11:45Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-17T16:34:59Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2988 - Precision: 0.8066 - Recall: 0.7644 - F1: 0.7849 - Accuracy: 0.9432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 13 | 1.7619 | 0.0 | 0.0 | 0.0 | 0.6185 | | No log | 2.0 | 26 | 1.4482 | 0.0526 | 0.0052 | 0.0095 | 0.6985 | | No log | 3.0 | 39 | 1.0747 | 0.0417 | 0.0105 | 0.0167 | 0.7429 | | No log | 4.0 | 52 | 0.8462 | 0.2262 | 0.0995 | 0.1382 | 0.7821 | | No log | 5.0 | 65 | 0.6852 | 0.3290 | 0.2670 | 0.2948 | 0.8172 | | No log | 6.0 | 78 | 0.5970 | 0.4346 | 0.4869 | 0.4593 | 0.8684 | | No log | 7.0 | 91 | 0.5108 | 0.5072 | 0.5497 | 0.5276 | 0.8880 | | No log | 8.0 | 104 | 0.4515 | 0.5882 | 0.6283 | 0.6076 | 0.9086 | | No log | 9.0 | 117 | 0.4105 | 0.6305 | 0.6702 | 0.6497 | 0.9169 | | No log | 10.0 | 130 | 0.3755 | 0.7120 | 0.6859 | 0.6987 | 0.9293 | | No log | 11.0 | 143 | 0.3661 | 0.7243 | 0.7016 | 0.7128 | 0.9293 | | No log | 12.0 | 156 | 0.3460 | 0.7273 | 0.7120 | 0.7196 | 0.9313 | | No log | 13.0 | 169 | 0.3287 | 0.7609 | 0.7330 | 0.7467 | 0.9355 | | No log | 14.0 | 182 | 0.3177 | 0.7701 | 0.7539 | 0.7619 | 0.9370 | | No log | 15.0 | 195 | 0.3133 | 0.7705 | 0.7382 | 0.7540 | 0.9360 | | No log | 16.0 | 208 | 0.3028 | 0.7826 | 0.7539 | 0.7680 | 0.9406 | | No log | 17.0 | 221 | 0.3062 | 0.7944 | 0.7487 | 0.7709 | 0.9391 | | No log | 18.0 | 234 | 0.3015 | 0.8011 | 0.7592 | 0.7796 | 0.9411 | | No log | 19.0 | 247 | 0.2997 | 0.7935 | 0.7644 | 0.7787 | 0.9432 | | No log | 20.0 | 260 | 0.2988 | 0.8066 | 0.7644 | 0.7849 | 0.9432 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cpu - Datasets 2.19.1 - Tokenizers 0.19.1
async0x42/MN-12B-Lyra-v4-exl2_8bpw
async0x42
2024-09-09T17:09:16Z
5
0
null
[ "safetensors", "mistral", "en", "license:cc-by-nc-4.0", "8-bit", "exl2", "region:us" ]
null
2024-09-09T15:24:33Z
--- license: cc-by-nc-4.0 language: - en --- ![Lyra](https://huggingface.co/Sao10K/MN-12B-Lyra-v4/resolve/main/lyra.png) Mistral-NeMo-12B-Lyra-v4, a variation of [Lyra-v4a1](https://huggingface.co/Sao10K/MN-12B-Lyra-v4a1), layered over [Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3), which was built on top of [Lyra-v2a2](https://huggingface.co/Sao10K/MN-12B-Lyra-v2a2), which itself was built upon [Lyra-v2a1](https://huggingface.co/Sao10K/MN-12B-Lyra-v2a1). # Model Versioning ``` [See Previous Models] | Lyra-v4a1 | ------------> Lyra-v4 [Separate RL Step targeting Instruct and Coherency over Base Nemo instead of SFT First, Result is Merged with Lyra-v4a1, fixes most quant-based issues. Somehow.] ``` # This uses ChatML, or any of its variants which were included in previous versions. ``` <|im_start|>system This is the system prompt.<|im_end|> <|im_start|>user Instructions placed here.<|im_end|> <|im_start|>assistant The model's response will be here.<|im_end|> -------------------------------------------------- [INST]system This is another system prompt.[/INST] [INST]user Your instructions placed here.[/INST] [INST]assistant The model's response will be here.[/INST] ``` # Recommended Samplers: ``` Temperature: 0.6 - 1 # Make sure min_p is set before Temperature in Sampler Orders min_p: 0.1 - 0.2 # Crucial for NeMo ``` # Recommended Stopping Strings: ``` <|im_end|> </s> [/INST] ``` # Notes \- I think I fixed the extra token stuff some users seem to be facing, while retaining everything else? It's some error alright. <br>\- If you're using XML tags, you may see weird malformed stopping strings. Just add them to your current list and move on. <br>\- It's pretty nice, imo. I've been messing around with it a lot. <br>\- Make sure the ChatML template is correct; I think there are some issues with the one used in SillyTavern which might cause improper replies?
Rodrigo1771/bioBIT-drugtemist-it-fasttext-8-ner
Rodrigo1771
2024-09-09T17:05:50Z
52
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:Rodrigo1771/drugtemist-it-fasttext-8-ner", "base_model:IVN-RIN/bioBIT", "base_model:finetune:IVN-RIN/bioBIT", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-09-09T16:21:06Z
--- library_name: transformers base_model: IVN-RIN/bioBIT tags: - token-classification - generated_from_trainer datasets: - Rodrigo1771/drugtemist-it-fasttext-8-ner metrics: - precision - recall - f1 - accuracy model-index: - name: output results: - task: name: Token Classification type: token-classification dataset: name: Rodrigo1771/drugtemist-it-fasttext-8-ner type: Rodrigo1771/drugtemist-it-fasttext-8-ner config: DrugTEMIST Italian NER split: validation args: DrugTEMIST Italian NER metrics: - name: Precision type: precision value: 0.9162702188392008 - name: Recall type: recall value: 0.9322362052274927 - name: F1 type: f1 value: 0.9241842610364683 - name: Accuracy type: accuracy value: 0.9987276032199429 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [IVN-RIN/bioBIT](https://huggingface.co/IVN-RIN/bioBIT) on the Rodrigo1771/drugtemist-it-fasttext-8-ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0064 - Precision: 0.9163 - Recall: 0.9322 - F1: 0.9242 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 470 | 0.0044 | 0.9108 | 0.8993 | 0.9050 | 0.9985 | | 0.0122 | 2.0 | 940 | 0.0051 | 0.9050 | 0.8848 | 0.8948 | 0.9984 | | 0.0032 | 3.0 | 1410 | 0.0049 | 0.9144 | 0.8993 | 0.9068 | 0.9985 | | 0.0017 | 4.0 | 1880 | 0.0060 | 0.9213 | 0.9177 | 0.9195 | 0.9986 | | 0.0011 | 5.0 | 2350 | 0.0071 | 0.9280 | 0.8858 | 0.9064 | 0.9985 | | 0.0007 | 6.0 | 2820 | 0.0060 | 0.9078 | 0.9245 | 0.9161 | 0.9986 | | 0.0005 | 7.0 | 3290 | 0.0059 | 0.9260 | 0.9206 | 0.9233 | 0.9988 | | 0.0004 | 8.0 | 3760 | 0.0064 | 0.9163 | 0.9322 | 0.9242 | 0.9987 | | 0.0002 | 9.0 | 4230 | 0.0067 | 0.9177 | 0.9284 | 0.9230 | 0.9986 | | 0.0001 | 10.0 | 4700 | 0.0069 | 0.9152 | 0.9303 | 0.9227 | 0.9987 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
passionMan/Llama-3-bllossom-8B-PM1-finetuned-v1-15-2
passionMan
2024-09-09T17:04:49Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T17:00:19Z
--- base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** passionMan - **License:** apache-2.0 - **Finetuned from model :** MLP-KTLim/llama-3-Korean-Bllossom-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
arcee-ai/Llama-Spark
arcee-ai
2024-09-09T17:04:16Z
2,888
27
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-07-26T06:36:47Z
--- license: llama3 --- <div align="center"> <img src="https://i.ibb.co/9hwFrvL/BLMs-Wkx-NQf-W-46-FZDg-ILhg.jpg" alt="Arcee Spark" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;"> </div> Llama-Spark is a powerful conversational AI model developed by Arcee.ai. It's built on the foundation of Llama-3.1-8B and merges the power of our Tome Dataset with Llama-3.1-8B-Instruct, resulting in a remarkable conversationalist that punches well above its 8B parameter weight class. ## GGUFs available [here](https://huggingface.co/arcee-ai/Llama-Spark-GGUF) ## Model Description Llama-Spark is our commitment to consistently delivering the best-performing conversational AI in the 6-9B parameter range. As new base models become available, we'll continue to update and improve Spark to maintain its leadership position. This model is a successor to our original Arcee-Spark, incorporating advancements and learnings from our ongoing research and development. ## Intended Uses Llama-Spark is intended for use in conversational AI applications, such as chatbots, virtual assistants, and dialogue systems. It excels at engaging in natural and informative conversations. ## Training Information Llama-Spark is built upon the Llama-3.1-8B base model, fine-tuned using the Tome Dataset and merged with Llama-3.1-8B-Instruct. ## Acknowledgements We extend our deepest gratitude to **PrimeIntellect** for being our compute sponsor for this project. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_arcee-ai__Llama-Spark) | Metric |Value| |-------------------|----:| |Avg. |24.90| |IFEval (0-Shot) |79.11| |BBH (3-Shot) |29.77| |MATH Lvl 5 (4-Shot)| 1.06| |GPQA (0-shot) | 6.60| |MuSR (0-shot) | 2.62| |MMLU-PRO (5-shot) |30.23|
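The card describes Llama-Spark as a conversationalist but includes no code. A hedged chat sketch, assuming the repo ships the usual Llama-3 chat template; the prompt is illustrative:

```python
from transformers import pipeline

chat = pipeline("text-generation", model="arcee-ai/Llama-Spark", device_map="auto")

messages = [
    {"role": "user", "content": "Suggest three icebreaker questions for a team meeting."},
]
# The pipeline applies the model's chat template and returns the full conversation
out = chat(messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])
```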
Ahmad0067/llama-3-8b-Instruct-moe-ref-blood-pres_Unsloth_correct_v2
Ahmad0067
2024-09-09T17:01:47Z
6
0
null
[ "safetensors", "mixtral", "region:us" ]
null
2024-09-09T16:57:45Z
--- license: apache-2.0 tags: - moe - mergekit - Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged - Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged --- # llama-3-8b-Instruct-moe-ref-blood-pres_Unsloth_correct_v2 llama-3-8b-Instruct-moe-ref-blood-pres_Unsloth_correct_v2 is a Mixture of Experts (MoE) model, configured as follows: * [Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged](https://huggingface.co/Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged) - **Positive Prompts**: Expert on Referral Orders extraction. - **Negative Prompts**: NOT good for Bloodwork Orders., NOT good for Prescription Orders. * [Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged](https://huggingface.co/Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged) - **Positive Prompts**: Expert on Prescription Orders extraction. - **Negative Prompts**: NOT good for Bloodwork Orders., NOT good for Referal Orders. ## 🧩 Configuration ```yaml base_model: unsloth/llama-3-8b-Instruct gate_mode: hidden dtype: bfloat16 experts: - source_model: Ahmad0067/llama-3-8b-Instruct-Referral_Synth_data_Phase_1_and_2_corect_unsloth_merged positive_prompts: - Expert on Referral Orders extraction. negative_prompts: - NOT good for Bloodwork Orders. - NOT good for Prescription Orders. - source_model: Ahmad0067/llama-3-8b-Instruct-Prescriptin_Synth_data_Phase_1_and_2_corect_unsloth_merged positive_prompts: - Expert on Prescription Orders extraction. negative_prompts: - NOT good for Bloodwork Orders. - NOT good for Referal Orders. ```
async0x42/NemoMix-Unleashed-12B-exl2_8bpw
async0x42
2024-09-09T16:49:49Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-09-09T16:16:52Z
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/zYBXSewLbIxWHZdB3oEHs.jpeg)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/eRwPcd9Ox03hn_WRnsotj.png)

# Information
## Details

Okay, I tried really hard to improve my ChatML merges, but that has gone terribly wrong. Everyone is adding special tokens with different IDs, so I can't even make a proper union tokenizer for them, damn. Not to mention, I made some... interesting discoveries in regards to some models' context lengths. You can watch the breakdown of how it went down here: https://www.captiongenerator.com/v/2303039/marinaraspaghetti's-merging-experience.

This one feels a bit different to my previous attempts and seems less prone to repetition, especially on higher contexts, which is great for me! I'll probably improve on it even further, but for now, it feels rather nice. Great for RP and storytelling. All credits and thanks go to the amazing MistralAI, Intervitens, Sao10K and Nbeerbower for their amazing models! Plus, special shoutouts to Parasitic Rogue for ideas and Prodeus Unity and Statuo for cool exl2 quants of my previous merges. Cheers to folks over at the Drummer's server! Have a good one, everyone.

## Instruct

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/JtOSIRNnMdGNycWACobO2.gif)

*Sigh,* Mistral Instruct, I'm afraid. UPDATE: WE HAD THE WRONG FORMAT ALL ALONG; I JUST RECEIVED HOW IT'S SUPPOSED TO LOOK FROM AN OFFICIAL MISTRALAI TEAM MEMBER.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550b16f7490049d6237f200/lAGz_ITUkVaLMvm_0AXuj.png)

...This has made me question everything I thought I knew.

```
<s>[INST]{system}[/INST]{response}</s>[INST]{user's message}[/INST]{response}</s>
```

## Parameters

I recommend running Temperature 1.0-1.25 with 0.1 Top A or 0.01-0.1 Min P, and with 0.8/1.75/2/0 DRY. It also works with Temperatures below 1.0. Nothing more needed.

### Settings

You can use my exact settings from here (use the ones from the Mistral Base/Customized folder; I also recommend checking the Mistral Improved folder): https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main.

## GGUF

https://huggingface.co/bartowski/NemoMix-Unleashed-12B-GGUF

## EXL2

https://huggingface.co/Statuo/NemoMix-Unleashed-EXL2-8bpw

# NemoMix-Unleashed-12B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the della_linear merge method, with E:\mergekit\mistralaiMistral-Nemo-Base-2407 as the base.
### Models Merged The following models were included in the merge: * E:\mergekit\intervitens_mini-magnum-12b-v1.1 * E:\mergekit\nbeerbower_mistral-nemo-bophades-12B * E:\mergekit\Sao10K_MN-12B-Lyra-v1 * E:\mergekit\nbeerbower_mistral-nemo-gutenberg-12B * E:\mergekit\mistralaiMistral-Nemo-Instruct-2407 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: E:\mergekit\mistralaiMistral-Nemo-Instruct-2407 parameters: weight: 0.1 density: 0.4 - model: E:\mergekit\nbeerbower_mistral-nemo-bophades-12B parameters: weight: 0.12 density: 0.5 - model: E:\mergekit\nbeerbower_mistral-nemo-gutenberg-12B parameters: weight: 0.2 density: 0.6 - model: E:\mergekit\Sao10K_MN-12B-Lyra-v1 parameters: weight: 0.25 density: 0.7 - model: E:\mergekit\intervitens_mini-magnum-12b-v1.1 parameters: weight: 0.33 density: 0.8 merge_method: della_linear base_model: E:\mergekit\mistralaiMistral-Nemo-Base-2407 parameters: epsilon: 0.05 lambda: 1 dtype: bfloat16 tokenizer_source: base ``` # Ko-fi ## Enjoying what I do? Consider donating here, thank you! https://ko-fi.com/spicy_marinara
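As a practical footnote to the Parameters section above, here is a small, hedged sketch (an editor's addition, not from the card) of how the recommended settings map onto plain `transformers` generation. Top-A and DRY are frontend-specific samplers (SillyTavern and friends) with no stock `transformers` equivalent, so only temperature and Min P carry over; the repo id below is an assumption inferred from the quant links above.

```python
# Hypothetical quick test of the card's recommended samplers with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MarinaraSpaghetti/NemoMix-Unleashed-12B"  # assumed full-precision source repo
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prompt assembled by hand in the Mistral format quoted in the Instruct section.
prompt = "<s>[INST]You are a vivid storyteller.[/INST]"
ids = tok(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

out = model.generate(
    **ids,
    do_sample=True,
    temperature=1.1,   # card recommends 1.0-1.25
    min_p=0.05,        # card recommends 0.01-0.1 Min P
    max_new_tokens=200,
)
print(tok.decode(out[0], skip_special_tokens=True))
```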
modularai/llama-3.1
modularai
2024-09-09T16:49:02Z
630,816
0
null
[ "gguf", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "arxiv:2204.05149", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3", "region:us" ]
text-generation
2024-09-04T18:48:44Z
---
language:
- en
license: llama3
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
model_name: Llama 3.1 8B
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
---

# Llama 3.1 8B Instruct - GGUF
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
- GGUF models quantized by:
  - [bartowski](https://huggingface.co/bartowski) at [bartowski/Meta-Llama-3.1-8B-Instruct-GGUF](https://huggingface.co/bartowski/Meta-Llama-3.1-8B-Instruct-GGUF)
  - [kaetemi](https://huggingface.co/kaetemi) at [kaetemi/Meta-Llama-3.1-8B-Q4_0-GGUF](https://huggingface.co/kaetemi/Meta-Llama-3.1-8B-Q4_0-GGUF)
  - [ddh0](https://huggingface.co/ddh0) at [bullerwins/Meta-Llama-3.1-8B-Instruct-GGU](https://huggingface.co/bullerwins/Meta-Llama-3.1-8B-Instruct-GGU)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Meta's Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).

<!-- description end -->

<!-- original-model-card start -->
## Model Information

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

**Model developer**: Meta

**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

<table>
  <tr>
   <td>
   </td>
   <td><strong>Training Data</strong>
   </td>
   <td><strong>Params</strong>
   </td>
   <td><strong>Input modalities</strong>
   </td>
   <td><strong>Output modalities</strong>
   </td>
   <td><strong>Context length</strong>
   </td>
   <td><strong>GQA</strong>
   </td>
   <td><strong>Token count</strong>
   </td>
   <td><strong>Knowledge cutoff</strong>
   </td>
  </tr>
  <tr>
   <td rowspan="3" >Llama 3.1 (text only)
   </td>
   <td rowspan="3" >A new mix of publicly available online data.
   </td>
   <td>8B
   </td>
   <td>Multilingual Text
   </td>
   <td>Multilingual Text and code
   </td>
   <td>128k
   </td>
   <td>Yes
   </td>
   <td rowspan="3" >15T+
   </td>
   <td rowspan="3" >December 2023
   </td>
  </tr>
  <tr>
   <td>70B
   </td>
   <td>Multilingual Text
   </td>
   <td>Multilingual Text and code
   </td>
   <td>128k
   </td>
   <td>Yes
   </td>
  </tr>
  <tr>
   <td>405B
   </td>
   <td>Multilingual Text
   </td>
   <td>Multilingual Text and code
   </td>
   <td>128k
   </td>
   <td>Yes
   </td>
  </tr>
</table>

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** July 23, 2024.

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**. **<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner. ## How to use This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Tool use with transformers LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/). Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers. 
Here is a quick example showing a single simple tool:

```python
from transformers import AutoTokenizer

# Load the tokenizer; its chat template knows how to render tool definitions.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
  {"role": "system", "content": "You are a bot that responds to weather queries."},
  {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```

You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

and then call the tool and append the result, with the `tool` role, like so:

```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```

After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).

### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

**Training utilized a cumulative total of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
<table> <tr> <td> </td> <td><strong>Training Time (GPU hours)</strong> </td> <td><strong>Training Power Consumption (W)</strong> </td> <td><strong>Training Location-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> <td><strong>Training Market-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> </tr> <tr> <td>Llama 3.1 8B </td> <td>1.46M </td> <td>700 </td> <td>420 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 70B </td> <td>7.0M </td> <td>700 </td> <td>2,040 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 405B </td> <td>30.84M </td> <td>700 </td> <td>8,930 </td> <td>0 </td> </tr> <tr> <td>Total </td> <td>39.3M <td> <ul> </ul> </td> <td>11,390 </td> <td>0 </td> </tr> </table> The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023. ## Benchmark scores In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="7" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>66.7 </td> <td>66.7 </td> <td>79.5 </td> <td>79.3 </td> <td>85.2 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>36.2 </td> <td>37.1 </td> <td>55.0 </td> <td>53.8 </td> <td>61.6 </td> </tr> <tr> <td>AGIEval English </td> <td>3-5 </td> <td>average/acc_char </td> <td>47.1 </td> <td>47.8 </td> <td>63.0 </td> <td>64.6 </td> <td>71.6 </td> </tr> <tr> <td>CommonSenseQA </td> <td>7 </td> <td>acc_char </td> <td>72.6 </td> <td>75.0 </td> <td>83.8 </td> <td>84.1 </td> <td>85.8 </td> </tr> <tr> <td>Winogrande </td> <td>5 </td> <td>acc_char </td> <td>- </td> <td>60.5 </td> <td>- </td> <td>83.3 </td> <td>86.7 </td> </tr> <tr> <td>BIG-Bench Hard (CoT) </td> <td>3 </td> <td>average/em </td> <td>61.1 </td> <td>64.2 </td> <td>81.3 </td> <td>81.6 </td> <td>85.9 </td> </tr> <tr> <td>ARC-Challenge </td> <td>25 </td> <td>acc_char </td> <td>79.4 </td> <td>79.7 </td> <td>93.1 </td> <td>92.9 </td> <td>96.1 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki </td> <td>5 </td> <td>em </td> <td>78.5 </td> <td>77.6 </td> <td>89.7 </td> <td>89.8 </td> <td>91.8 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD </td> <td>1 </td> <td>em </td> <td>76.4 </td> <td>77.0 </td> <td>85.6 </td> <td>81.8 </td> <td>89.3 </td> </tr> <tr> <td>QuAC (F1) </td> <td>1 </td> <td>f1 </td> <td>44.4 </td> <td>44.9 </td> <td>51.1 </td> <td>51.1 </td> <td>53.6 </td> </tr> <tr> <td>BoolQ </td> <td>0 </td> <td>acc_char </td> <td>75.7 </td> <td>75.0 </td> <td>79.0 </td> <td>79.4 </td> <td>80.0 </td> </tr> <tr> <td>DROP (F1) </td> 
<td>3 </td> <td>f1 </td> <td>58.4 </td> <td>59.5 </td> <td>79.7 </td> <td>79.6 </td> <td>84.8 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B Instruct</strong> </td> <td><strong>Llama 3.1 8B Instruct</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3.1 70B Instruct</strong> </td> <td><strong>Llama 3.1 405B Instruct</strong> </td> </tr> <tr> <td rowspan="4" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc </td> <td>68.5 </td> <td>69.4 </td> <td>82.0 </td> <td>83.6 </td> <td>87.3 </td> </tr> <tr> <td>MMLU (CoT) </td> <td>0 </td> <td>macro_avg/acc </td> <td>65.3 </td> <td>73.0 </td> <td>80.9 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>micro_avg/acc_char </td> <td>45.5 </td> <td>48.3 </td> <td>63.4 </td> <td>66.4 </td> <td>73.3 </td> </tr> <tr> <td>IFEval </td> <td> </td> <td> </td> <td>76.8 </td> <td>80.4 </td> <td>82.9 </td> <td>87.5 </td> <td>88.6 </td> </tr> <tr> <td rowspan="2" >Reasoning </td> <td>ARC-C </td> <td>0 </td> <td>acc </td> <td>82.4 </td> <td>83.4 </td> <td>94.4 </td> <td>94.8 </td> <td>96.9 </td> </tr> <tr> <td>GPQA </td> <td>0 </td> <td>em </td> <td>34.6 </td> <td>30.4 </td> <td>39.5 </td> <td>41.7 </td> <td>50.7 </td> </tr> <tr> <td rowspan="4" >Code </td> <td>HumanEval </td> <td>0 </td> <td>pass@1 </td> <td>60.4 </td> <td>72.6 </td> <td>81.7 </td> <td>80.5 </td> <td>89.0 </td> </tr> <tr> <td>MBPP ++ base version </td> <td>0 </td> <td>pass@1 </td> <td>70.6 </td> <td>72.8 </td> <td>82.5 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>Multipl-E HumanEval </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>50.8 </td> <td>- </td> <td>65.5 </td> <td>75.2 </td> </tr> <tr> <td>Multipl-E MBPP </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>52.4 </td> <td>- </td> <td>62.0 </td> <td>65.7 </td> </tr> <tr> <td rowspan="2" >Math </td> <td>GSM-8K (CoT) </td> <td>8 </td> <td>em_maj1@1 </td> <td>80.6 </td> <td>84.5 </td> <td>93.0 </td> <td>95.1 </td> <td>96.8 </td> </tr> <tr> <td>MATH (CoT) </td> <td>0 </td> <td>final_em </td> <td>29.1 </td> <td>51.9 </td> <td>51.0 </td> <td>68.0 </td> <td>73.8 </td> </tr> <tr> <td rowspan="4" >Tool Use </td> <td>API-Bank </td> <td>0 </td> <td>acc </td> <td>48.3 </td> <td>82.6 </td> <td>85.1 </td> <td>90.0 </td> <td>92.0 </td> </tr> <tr> <td>BFCL </td> <td>0 </td> <td>acc </td> <td>60.3 </td> <td>76.1 </td> <td>83.0 </td> <td>84.8 </td> <td>88.5 </td> </tr> <tr> <td>Gorilla Benchmark API Bench </td> <td>0 </td> <td>acc </td> <td>1.7 </td> <td>8.2 </td> <td>14.7 </td> <td>29.7 </td> <td>35.3 </td> </tr> <tr> <td>Nexus (0-shot) </td> <td>0 </td> <td>macro_avg/acc </td> <td>18.1 </td> <td>38.5 </td> <td>47.8 </td> <td>56.7 </td> <td>58.7 </td> </tr> <tr> <td>Multilingual </td> <td>Multilingual MGSM (CoT) </td> <td>0 </td> <td>em </td> <td>- </td> <td>68.9 </td> <td>- </td> <td>86.9 </td> <td>91.6 </td> </tr> </table> #### Multilingual benchmarks <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Language</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="9" ><strong>General</strong> </td> <td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong> </td> <td>Portuguese </td> <td>62.12 </td> <td>80.13 </td> <td>84.95 </td> </tr> <tr> 
<td>Spanish
   </td>
   <td>62.45
   </td>
   <td>80.05
   </td>
   <td>85.08
   </td>
  </tr>
  <tr>
   <td>Italian
   </td>
   <td>61.63
   </td>
   <td>80.4
   </td>
   <td>85.04
   </td>
  </tr>
  <tr>
   <td>German
   </td>
   <td>60.59
   </td>
   <td>79.27
   </td>
   <td>84.36
   </td>
  </tr>
  <tr>
   <td>French
   </td>
   <td>62.34
   </td>
   <td>79.82
   </td>
   <td>84.66
   </td>
  </tr>
  <tr>
   <td>Hindi
   </td>
   <td>50.88
   </td>
   <td>74.52
   </td>
   <td>80.31
   </td>
  </tr>
  <tr>
   <td>Thai
   </td>
   <td>50.32
   </td>
   <td>72.95
   </td>
   <td>78.21
   </td>
  </tr>
</table>

## Responsibility & Safety

As part of our Responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.

### Responsible deployment

Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the power of the technology by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver's seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide; you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more.

#### Llama 3.1 instruct

Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications, reducing the developer workload of deploying safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.

**Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.1 systems

**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. #### New capabilities Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases. **Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards. **Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in other languages than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing finetuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide. ### Evaluations We evaluated Llama models for common use cases as well as specific capabilities. Common use cases evaluations measure safety risks of systems for most commonly built applications including chat bot, coding assistant, tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application. Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which were crafted dedicated benchmarks including long context, multilingual, tools calls, coding or memorization. **Red teaming** For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. 
### Critical and other risks We specifically focused our efforts on mitigating the following critical risk areas: **1- CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness** To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. **2. Child Safety** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3. Cyber attack enablement** Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). 
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.

<!-- original-model-card end -->
AquilaX-AI/NL-JSON-Start-Scan
AquilaX-AI
2024-09-09T16:46:03Z
71
2
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-07-23T12:28:51Z
---
library_name: transformers
license: apache-2.0
pipeline_tag: text2text-generation
---

# AquilaX-NL-JSON-Start-Scan

## Overview

AquilaX-NL-JSON-Start-Scan is a model built using Hugging Face's T5-small to convert natural language queries about vulnerabilities into JSON queries for MongoDB.

## Model Information

### Model

- **Name**: AquilaX-NL-JSON-Start-Scan
- **Architecture**: T5-small
- **Framework**: Hugging Face Transformers

### Description

The AquilaX-NL-JSON-Start-Scan model is designed to interpret natural language queries related to vulnerabilities in code and convert them into JSON queries that can be executed on a MongoDB database. This facilitates automated scanning and analysis of code repositories for security issues. The model leverages the capabilities of the T5-small architecture, which is well-suited for natural language understanding and generation tasks.

# Getting Started

## Usage

Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers[torch]`, then copy the snippet from the relevant section.

## Requirements

```bash
pip install transformers[torch]
```

## Inference Code

```python
import json


def convert_to_json(answer):
    """
    Convert a string representation of a dictionary to a JSON object.

    This function takes a string representation of a dictionary, cleans it by
    removing specific unwanted tokens and correcting boolean representations,
    and then converts it into a JSON object.

    Parameters:
    answer (str): The input string representing a dictionary.

    Returns:
    dict: The JSON object converted from the input string.
    """
    answer = answer.replace("<pad>", "").replace("</s>", "")
    answer = answer.strip("'")
    answer = answer.replace("false", "False").replace("true", "True")
    # NOTE: eval() assumes the model output is trusted; ast.literal_eval would
    # be a safer drop-in for stricter parsing.
    answer_dict = eval(answer)
    answer_json = json.dumps(answer_dict)
    json_data = json.loads(answer_json)
    return json_data


def valid_url(url):
    """
    Validate the given URL against a list of supported platforms.

    This function checks if the provided URL belongs to one of the supported
    platforms for scanning. If the URL is valid, it returns True. Otherwise,
    it returns a message indicating that the URL is not supported and lists
    the available scanners.

    Parameters:
    url (str): The URL to be validated.

    Returns:
    bool or dict: Returns True if the URL is valid, otherwise returns a
    dictionary with a message indicating the URL is not supported and lists
    the available scanners.
    """
    valid_list = [
        "github.com", "bitbucket.org", "sourceforge.net", "aws.amazon.com",
        "dev.azure.com", "gitea.com", "gogs.io", "phabricator.com",
        "gitkraken.com", "beanstalkapp.com", "gitlab.com"
    ]
    platform = url.split("//")[1].split("/")[0]
    if platform in valid_list:
        return True
    return {
        'message': 'Provide a valid URL for scanning. Currently, we support PII_Scanner, SAST_Scanner, Sac_Scanner (Open_Source_Security), IaC_Scanner, Container_Scanner'
    }
```

```python
import re  # needed for the URL extraction below
import time

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AquilaX-AI/NL-JSON-Start-Scan")
model = AutoModelForSeq2SeqLM.from_pretrained("AquilaX-AI/NL-JSON-Start-Scan")

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Replace YOUR_QUERY, e.g.: scan this https://github.com/mr-vicky-01/educational-assitant every week using pii and sast scan
query = "Translate the following text to JSON: " + "YOUR_QUERY".lower()
query = query.replace(",", "")

start = time.time()
inputs = tokenizer(query, return_tensors="pt")

model.to(device)
inputs = inputs.to(device)

outputs = model.generate(**inputs, max_length=256)
answer = tokenizer.decode(outputs[0])

try:
    json_data = convert_to_json(answer)
except:
    json_data = {'message': 'We encountered an issue with your query. Please use the Personalized Scan option for accurate results.'}

to_return = json_data.copy()

try:
    valid = valid_url(json_data["repo"])
    if valid != True:
        to_return = valid
    else:
        url = re.findall(r'https?://\S+', query)
        to_return['repo'] = url
except:
    pass

end = time.time()
print(to_return)
print(f"Time taken: {end - start}")
```

## License

This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.

## Authors

- [Aquilax-Ai](https://huggingface.co/AquilaX-AI)
- [Suriya](https://huggingface.co/suriya7)
- [Vicky](https://huggingface.co/Mr-Vicky-01)

## Acknowledgments

- Hugging Face for the Transformers library.
mradermacher/Phind-CodeLlama-34B-v1-GGUF
mradermacher
2024-09-09T16:38:35Z
137
0
transformers
[ "transformers", "gguf", "code llama", "en", "base_model:Phind/Phind-CodeLlama-34B-v1", "base_model:quantized:Phind/Phind-CodeLlama-34B-v1", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-09-09T01:49:27Z
--- base_model: Phind/Phind-CodeLlama-34B-v1 language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - code llama --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Phind/Phind-CodeLlama-34B-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q2_K.gguf) | Q2_K | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.IQ3_XS.gguf) | IQ3_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q3_K_S.gguf) | Q3_K_S | 14.7 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.IQ3_S.gguf) | IQ3_S | 14.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.IQ3_M.gguf) | IQ3_M | 15.3 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q3_K_M.gguf) | Q3_K_M | 16.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q3_K_L.gguf) | Q3_K_L | 17.9 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.IQ4_XS.gguf) | IQ4_XS | 18.3 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q4_K_S.gguf) | Q4_K_S | 19.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q4_K_M.gguf) | Q4_K_M | 20.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q5_K_S.gguf) | Q5_K_S | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q5_K_M.gguf) | Q5_K_M | 23.9 | | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q6_K.gguf) | Q6_K | 27.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Phind-CodeLlama-34B-v1-GGUF/resolve/main/Phind-CodeLlama-34B-v1.Q8_0.gguf) | Q8_0 | 36.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
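As a concrete starting point for the Usage section above, here is a short sketch (an editor's addition, not part of this quant card) using the `llama-cpp-python` bindings. The filename is the Q4_K_M file from the table; you'll need roughly 20GB of free memory for that quant.

```python
# Hypothetical quick start: download one quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Phind-CodeLlama-34B-v1-GGUF",
    filename="Phind-CodeLlama-34B-v1.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```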
yangzhao02/mistral-7b-lambdarank
yangzhao02
2024-09-09T16:34:51Z
5
0
null
[ "safetensors", "mistral", "alignment-handbook", "ndcg", "trl", "generated_from_trainer", "dataset:yangzhao02/ListUltraFeedback", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:finetune:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-09-09T03:52:46Z
--- license: apache-2.0 base_model: alignment-handbook/zephyr-7b-sft-full tags: - alignment-handbook - ndcg - trl - generated_from_trainer - trl - ndcg - generated_from_trainer datasets: - yangzhao02/ListUltraFeedback model-index: - name: mistral-7b-base-lipo-listsize_8-beta_0.05-batchsize_128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhaoyang1/huggingface/runs/qrzgbz3l) # mistral-7b-base-lipo-listsize_8-beta_0.05-batchsize_128 This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the yangzhao02/ListUltraFeedback dataset. It achieves the following results on the evaluation set: - Loss: 0.0020 - Logps: -326.4341 - Logits: -1.8607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Logps | Logits | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.0024 | 0.5343 | 250 | 0.0021 | -330.3465 | -1.7109 | ### Framework versions - Transformers 4.42.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
sanchit42/mistral-7b_cced2_2
sanchit42
2024-09-09T16:30:18Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T16:26:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
izaitova/liberta-large-topic_classification
izaitova
2024-09-09T16:12:05Z
7
0
null
[ "tensorboard", "safetensors", "bert", "generated_from_trainer", "base_model:Goader/liberta-large", "base_model:finetune:Goader/liberta-large", "license:cc-by-4.0", "region:us" ]
null
2024-09-09T16:04:12Z
--- license: cc-by-4.0 base_model: Goader/liberta-large tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: liberta-large-topic_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # liberta-large-topic_classification This model is a fine-tuned version of [Goader/liberta-large](https://huggingface.co/Goader/liberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7957 - Precision: 0.9167 - Recall: 0.8749 - F1: 0.8889 - Accuracy: 0.8971 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 88 | 0.7214 | 0.8294 | 0.7438 | 0.7532 | 0.7843 | | No log | 2.0 | 176 | 0.6388 | 0.8181 | 0.7797 | 0.7826 | 0.8088 | | No log | 3.0 | 264 | 0.8149 | 0.8625 | 0.8692 | 0.8617 | 0.8725 | | No log | 4.0 | 352 | 0.8210 | 0.9171 | 0.8603 | 0.8695 | 0.8824 | | No log | 5.0 | 440 | 0.7850 | 0.9173 | 0.8700 | 0.8841 | 0.8922 | | 0.3285 | 6.0 | 528 | 0.7936 | 0.8987 | 0.8670 | 0.8770 | 0.8824 | | 0.3285 | 7.0 | 616 | 0.7794 | 0.9217 | 0.8749 | 0.8913 | 0.8971 | | 0.3285 | 8.0 | 704 | 0.7835 | 0.9217 | 0.8749 | 0.8913 | 0.8971 | | 0.3285 | 9.0 | 792 | 0.7947 | 0.9167 | 0.8749 | 0.8889 | 0.8971 | | 0.3285 | 10.0 | 880 | 0.7957 | 0.9167 | 0.8749 | 0.8889 | 0.8971 | ### Framework versions - Transformers 4.39.3 - Pytorch 1.11.0a0+17540c5 - Datasets 2.21.0 - Tokenizers 0.15.2
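Since the card does not name the training dataset or label set, here is only a minimal, hedged inference sketch (an editor's addition): the predicted label names come from whatever `id2label` mapping the checkpoint's config carries.

```python
# Minimal sketch: topic classification with the fine-tuned checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="izaitova/liberta-large-topic_classification")
# LiBERTa is a Ukrainian language model, so a Ukrainian input is assumed here.
print(clf("Уряд ухвалив новий бюджет на наступний рік."))
```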
RichardErkhov/yam-peleg_-_Experiment24-7B-gguf
RichardErkhov
2024-09-09T16:09:25Z
22
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-09-09T10:52:34Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Experiment24-7B - GGUF - Model creator: https://huggingface.co/yam-peleg/ - Original model: https://huggingface.co/yam-peleg/Experiment24-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Experiment24-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [Experiment24-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Experiment24-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Experiment24-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Experiment24-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Experiment24-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [Experiment24-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Experiment24-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Experiment24-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Experiment24-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [Experiment24-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Experiment24-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Experiment24-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [Experiment24-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Experiment24-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [Experiment24-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [Experiment24-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Experiment24-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [Experiment24-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Experiment24-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | 
[Experiment24-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Experiment24-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/yam-peleg_-_Experiment24-7B-gguf/blob/main/Experiment24-7B.Q8_0.gguf) | Q8_0 | 7.17GB |

Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---

**Experiment24-7B**

An experiment for testing and refining a specific training and evaluation pipeline research framework.

This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.

The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs. The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.

More details will follow in future experiments.
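A small, hedged helper (an editor's addition) for choosing a quant from the size table above; the 20% headroom figure is a rough rule of thumb for KV cache and runtime overhead, not a measured value.

```python
# Sketch: pick the largest quant from the table that leaves ~20% memory headroom.
quants_gb = {
    "Q2_K": 2.53, "Q3_K_M": 3.28, "Q4_K_M": 4.07,
    "Q5_K_M": 4.78, "Q6_K": 5.53, "Q8_0": 7.17,
}  # sizes copied from the table above

free_memory_gb = 8.0  # adjust to your machine
fits = {name: gb for name, gb in quants_gb.items() if gb <= free_memory_gb * 0.8}
print(max(fits, key=fits.get))  # with 8 GB free -> "Q6_K"
```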
rttrsabc/CogVideoX-2b
rttrsabc
2024-09-09T16:00:29Z
6
1
diffusers
[ "diffusers", "safetensors", "cogvideox", "video-generation", "thudm", "text-to-video", "en", "arxiv:2408.06072", "license:apache-2.0", "diffusers:CogVideoXPipeline", "region:us" ]
text-to-video
2024-09-09T16:00:29Z
--- license: apache-2.0 language: - en tags: - cogvideox - video-generation - thudm - text-to-video inference: false --- # CogVideoX-2B <p style="text-align: center;"> <div align="center"> <img src=https://github.com/THUDM/CogVideo/raw/main/resources/logo.svg width="50%"/> </div> <p align="center"> <a href="https://huggingface.co/THUDM/CogVideoX-2b/blob/main/README_zh.md">📄 中文阅读</a> | <a href="https://huggingface.co/spaces/THUDM/CogVideoX-2B-Space">🤗 Huggingface Space</a> | <a href="https://github.com/THUDM/CogVideo">🌐 Github </a> | <a href="https://arxiv.org/pdf/2408.06072">📜 arxiv </a> </p> <p align="center"> 📍 Visit <a href="https://chatglm.cn/video?lang=en?fr=osm_cogvideo">QingYing</a> and <a href="https://open.bigmodel.cn/?utm_campaign=open&_channel_track_key=OWTVNma9">API Platform</a> to experience commercial video generation models. </p> ## Demo Show <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Video Gallery with Captions</title> <style> .video-container { display: flex; flex-wrap: wrap; justify-content: space-around; } .video-item { width: 45%; margin-bottom: 20px; transition: transform 0.3s; } .video-item:hover { transform: scale(1.1); } .caption { text-align: center; margin-top: 10px; font-size: 11px; } </style> </head> <body> <div class="video-container"> <div class="video-item"> <video width="100%" controls> <source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/1.mp4" type="video/mp4"> </video> <div class="caption">A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea. The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse. Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood, with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.</div> </div> <div class="video-item"> <video width="100%" controls> <source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/2.mp4" type="video/mp4"> </video> <div class="caption">The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it’s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. 
The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds.</div> </div> <div class="video-item"> <video width="100%" controls> <source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/3.mp4" type="video/mp4"> </video> <div class="caption">A street artist, clad in a worn-out denim jacket and a colorful bandana, stands before a vast concrete wall in the heart, holding a can of spray paint, spray-painting a colorful bird on a mottled wall.</div> </div> <div class="video-item"> <video width="100%" controls> <source src="https://github.com/THUDM/CogVideo/raw/main/resources/videos/4.mp4" type="video/mp4"> </video> <div class="caption"> In the haunting backdrop of a war-torn city, where ruins and crumbled walls tell a story of devastation, a poignant close-up frames a young girl. Her face is smudged with ash, a silent testament to the chaos around her. Her eyes glistening with a mix of sorrow and resilience, capturing the raw emotion of a world that has lost its innocence to the ravages of conflict.</div> </div> </div> </body> </html> ## Model Introduction CogVideoX is an open-source version of the video generation model originating from [QingYing](https://chatglm.cn/video?lang=en?fr=osm_cogvideo). The table below displays the list of video generation models we currently offer, along with their foundational information. <table style="border-collapse: collapse; width: 100%;"> <tr> <th style="text-align: center;">Model Name</th> <th style="text-align: center;">CogVideoX-2B (This Repository)</th> <th style="text-align: center;">CogVideoX-5B</th> </tr> <tr> <td style="text-align: center;">Model Description</td> <td style="text-align: center;">Entry-level model, balancing compatibility. Low cost for running and secondary development.</td> <td style="text-align: center;">Larger model with higher video generation quality and better visual effects.</td> </tr> <tr> <td style="text-align: center;">Inference Precision</td> <td style="text-align: center;"><b>FP16* (Recommended)</b>, BF16, FP32, FP8*, INT8, no support for INT4</td> <td style="text-align: center;"><b>BF16 (Recommended)</b>, FP16, FP32, FP8*, INT8, no support for INT4</td> </tr> <tr> <td style="text-align: center;">Single GPU VRAM Consumption<br></td> <td style="text-align: center;"><a href="https://github.com/THUDM/SwissArmyTransformer">SAT</a> FP16: 18GB <br><b>diffusers FP16: starting from 4GB*</b><br><b>diffusers INT8(torchao): starting from 3.6GB*</b></td> <td style="text-align: center;"><a href="https://github.com/THUDM/SwissArmyTransformer">SAT</a> BF16: 26GB <br><b>diffusers BF16: starting from 5GB*</b><br><b>diffusers INT8(torchao): starting from 4.4GB*</b></td> </tr> <tr> <td style="text-align: center;">Multi-GPU Inference VRAM Consumption</td> <td style="text-align: center;"><b>FP16: 10GB* using diffusers</b></td> <td style="text-align: center;"><b>BF16: 15GB* using diffusers</b></td> </tr> <tr> <td style="text-align: center;">Inference Speed<br>(Step = 50, FP/BF16)</td> <td style="text-align: center;">Single A100: ~90 seconds<br>Single H100: ~45 seconds</td> <td style="text-align: center;">Single A100: ~180 seconds<br>Single H100: ~90 seconds</td> </tr> <tr> <td style="text-align: center;">Fine-tuning Precision</td> <td style="text-align: center;"><b>FP16</b></td> <td style="text-align: center;"><b>BF16</b></td> </tr> <tr> <td style="text-align: center;">Fine-tuning VRAM Consumption (per GPU)</td> <td style="text-align: center;">47 GB (bs=1, LORA)<br> 61 GB (bs=2, 
LORA)<br> 62GB (bs=1, SFT)</td> <td style="text-align: center;">63 GB (bs=1, LORA)<br> 80 GB (bs=2, LORA)<br> 75GB (bs=1, SFT)</td> </tr> <tr> <td style="text-align: center;">Prompt Language</td> <td colspan="2" style="text-align: center;">English*</td> </tr> <tr> <td style="text-align: center;">Prompt Length Limit</td> <td colspan="2" style="text-align: center;">226 Tokens</td> </tr> <tr> <td style="text-align: center;">Video Length</td> <td colspan="2" style="text-align: center;">6 Seconds</td> </tr> <tr> <td style="text-align: center;">Frame Rate</td> <td colspan="2" style="text-align: center;">8 Frames per Second</td> </tr> <tr> <td style="text-align: center;">Video Resolution</td> <td colspan="2" style="text-align: center;">720 x 480, no support for other resolutions (including fine-tuning)</td> </tr> <tr> <td style="text-align: center;">Positional Encoding</td> <td style="text-align: center;">3d_sincos_pos_embed</td> <td style="text-align: center;">3d_rope_pos_embed</td> </tr> </table> **Data Explanation** + When testing using the `diffusers` library, all optimizations provided by the `diffusers` library were enabled. This solution has not been tested for actual VRAM/memory usage on devices other than **NVIDIA A100 / H100**. Generally, this solution can be adapted to all devices with **NVIDIA Ampere architecture** and above. If the optimizations are disabled, VRAM usage will increase significantly, with peak VRAM usage being about 3 times higher than the table shows. However, speed will increase by 3-4 times. You can selectively disable some optimizations, including: ``` pipe.enable_model_cpu_offload() pipe.enable_sequential_cpu_offload() pipe.vae.enable_slicing() pipe.vae.enable_tiling() ``` + When performing multi-GPU inference, the `enable_model_cpu_offload()` optimization needs to be disabled. + Using INT8 models will reduce inference speed. This is to ensure that GPUs with lower VRAM can perform inference normally while maintaining minimal video quality loss, though inference speed will decrease significantly. + The 2B model is trained with `FP16` precision, and the 5B model is trained with `BF16` precision. We recommend using the precision the model was trained with for inference. + [PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, Transformer, and VAE modules to reduce CogVideoX's memory requirements. This makes it possible to run the model on a free T4 Colab or GPUs with smaller VRAM! It is also worth noting that TorchAO quantization is fully compatible with `torch.compile`, which can significantly improve inference speed. `FP8` precision must be used on devices with `NVIDIA H100` or above, which requires installing the `torch`, `torchao`, `diffusers`, and `accelerate` Python packages from source. `CUDA 12.4` is recommended. + The inference speed test also used the above VRAM optimization scheme. Without VRAM optimization, inference speed increases by about 10%. Only the `diffusers` version of the model supports quantization. + The model only supports English input; other languages can be translated into English during refinement by a large model. **Note** + Using [SAT](https://github.com/THUDM/SwissArmyTransformer) for inference and fine-tuning of SAT version models. Feel free to visit our GitHub for more information. ## Quick Start 🤗 This model supports deployment using the huggingface diffusers library. You can deploy it by following these steps. 
**We recommend that you visit our [GitHub](https://github.com/THUDM/CogVideo) and check out the relevant prompt optimizations and conversions to get a better experience.**

1. Install the required dependencies

```shell
# diffusers>=0.30.1
# transformers>=4.44.0
# accelerate>=0.33.0 (suggested: install from source)
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
```

2. Run the code

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
)

pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```

## Quantized Inference

[PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be used to quantize the text encoder, Transformer, and VAE modules to lower the memory requirement of CogVideoX. This makes it possible to run the model on a free-tier T4 Colab or smaller-VRAM GPUs as well! It is also worth noting that TorchAO quantization is fully compatible with `torch.compile`, which allows for much faster inference.

```diff
# To get started, PytorchAO needs to be installed from the GitHub source and PyTorch Nightly.
# Source and nightly installation is only required until the next release.

import torch
from diffusers import AutoencoderKLCogVideoX, CogVideoXTransformer3DModel, CogVideoXPipeline
from diffusers.utils import export_to_video
+ from transformers import T5EncoderModel
+ from torchao.quantization import quantize_, int8_weight_only, int8_dynamic_activation_int8_weight

+ quantization = int8_weight_only

+ text_encoder = T5EncoderModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="text_encoder", torch_dtype=torch.bfloat16)
+ quantize_(text_encoder, quantization())

+ transformer = CogVideoXTransformer3DModel.from_pretrained("THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.bfloat16)
+ quantize_(transformer, quantization())

+ vae = AutoencoderKLCogVideoX.from_pretrained("THUDM/CogVideoX-2b", subfolder="vae", torch_dtype=torch.bfloat16)
+ quantize_(vae, quantization())

# Create pipeline and run inference
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
+ text_encoder=text_encoder,
+ transformer=transformer,
+ vae=vae,
    torch_dtype=torch.bfloat16,
)

pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```

Additionally, the models can be serialized and stored in a quantized data type to save disk space when using PytorchAO. Find examples and benchmarks at these links:
- [torchao](https://gist.github.com/a-r-r-o-w/4d9732d17412888c885480c6521a9897)
- [quanto](https://gist.github.com/a-r-r-o-w/31be62828b00a9292821b85c1017effa)

## Explore the Model

Welcome to our [GitHub](https://github.com/THUDM/CogVideo), where you will find:

1. More detailed technical explanations and code.
2. Optimization and conversion of prompts.
3. Inference and fine-tuning of SAT-version models, including pre-release versions.
4. Project update logs and more opportunities for interaction.
5. The CogVideoX toolchain to help you make better use of the model.
6. INT8 model inference code.

## Model License

The CogVideoX-2B model (including its corresponding Transformers module and VAE module) is released under the [Apache 2.0 License](LICENSE).

The CogVideoX-5B model (Transformers module) is released under the [CogVideoX LICENSE](https://huggingface.co/THUDM/CogVideoX-5b/blob/main/LICENSE).

## Citation

```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```
KuanP/continual-pretrain-a100_large_epoch-lr2e-5-cw10.0-lg0.5.new_2024-09-09_fold_2
KuanP
2024-09-09T16:00:07Z
14
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-09-09T16:00:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KuanP/continual-pretrain-a100_large_epoch-lr2e-5-cw10.0-lg0.5.new_2024-09-09_fold_5
KuanP
2024-09-09T15:59:05Z
17
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-09-09T15:58:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JunxiongWang/MambaByte_Arxiv
JunxiongWang
2024-09-09T15:58:01Z
24
3
transformers
[ "transformers", "pytorch", "text-generation", "arxiv:2401.13660", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-04-22T22:35:30Z
---
license: apache-2.0
metrics:
- perplexity
pipeline_tag: text-generation
---

Trained on 30B bytes. Model size: 353M parameters. See Table 2 in [MambaByte](https://arxiv.org/abs/2401.13660).

To use:

```python
import torch
import numpy as np
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

model = MambaLMHeadModel.from_pretrained("JunxiongWang/MambaByte_Arxiv", device='cuda', dtype=torch.bfloat16)

# MambaByte consumes raw UTF-8 bytes rather than tokens.
text = r"\documentclass[12pt]{article}"  # raw string so the LaTeX backslash is preserved literally
text_byte = np.frombuffer(text.encode('utf-8'), dtype=np.uint8)
input_ids = torch.from_numpy(text_byte[None, :].copy()).long().cuda()

sample = model.generate(
    input_ids=input_ids,
    max_length=2048,
    cg=True,
    return_dict_in_generate=True,
    output_scores=True,
    enable_timing=True,
    temperature=1,
    top_k=256,
    top_p=0.9,
)

# Decode the generated byte sequence back to text.
print(bytes(sample.sequences[0].tolist()).decode('utf-8'))
```

Output:

```
\documentclass[12pt]{article}}}}^{{\mathbf{P}}\uplus{\mathbf{Q}}}}}}}{}}$ is a symmetric poset. This implies that $$\operatorname{end}({\mathscr{L}}) = \operatorname{end}({\mathscr{L}}\setminus\{\sigma_{{\mathbf{P}}}\}) = \operatorname{end}({\mathscr{L}}\setminus\{\sigma_{{\mathbf{Q}}}\}) = \operatorname{end}({\mathscr{L}}\setminus\{\sigma_{{\mathbf{P}}},\sigma_{{\mathbf{Q}}}\}),$$ i.e., ${\mathscr{L}}$ is $\{\sigma_{{\mathbf{P}}},\sigma_{{\mathbf{Q}}}\}$-bistochastic for any ${\mathbf{P}}\neq{\mathbf{Q}}$. Thus, ${\mathscr{L}}$ is reversible, and is in fact maximal among all $\{\sigma_{{\mathbf{P}}},\sigma_{{\mathbf{Q}}}\}$-bistochastic matrices. Since ${\mathscr{L}}$ is in the same class as $\sigma_{{\mathbf{P}}}^{{\mathbf{Q}}}$, we have $\operatorname{end}({\mathscr{L}})\subseteq\operatorname{end}({\mathscr{L}})$. Conversely, if $\operatorname{end}({\mathscr{L}})=\operatorname{end}({\mathscr{L}})$, then $\sigma_{{\mathbf{P}}}^{{\mathbf{Q}}}$ is maximal in $\operatorname{end}({\mathscr{L}})$. Since ${\mathbf{P}}\setminus\{\sigma_{{\mathbf{P}}}\}\subseteq\operatorname{end}({\mathscr{L}})$, this implies that ${\mathscr{L}}$ is in the same class as $\sigma_{{\mathbf{P}}}^{{\mathbf{Q}}}$, and hence ${\mathscr{L}}$ is reversible.

We are now ready to show that $\{\sigma_{{\mathbf{P}}},\sigma_{{\mathbf{Q}}}\}$-bistochastic matrices form a symmetric poset of ends.

\[lem:end\_symm\_class\] Let ${\mathbf{P}},{\mathbf{Q}}\in{\mathscr{M}}$. Then $\sigma_{{\mathbf{P}}}^{{\mathbf{Q}}}$ is symmetric if and only if $\operatorname{end}({\mathscr{L}})=\operatorname{end}({\mathscr{L}})$.

Suppose that $\operatorname{end}({\mathscr{L}})=\operatorname{end}({\mathscr{L}})$, and we prove that $\sigma_{{\mathbf{P}}}^{{\mathbf{Q}}}$ is symmetric. Clearly, $\operatorname{end}({\mathscr{L}})$ contains exactly the ends of $\operatorname{end}({\mathscr{L}})$ by definition, and the only case that survives is when $\operatorname{end}({\mathscr{L}})=\operatorname{end}({\mathscr{L}})$. By construction, this means that $\sigma_{{\mathbf{P}}}
```
mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF
mradermacher
2024-09-09T15:53:25Z
87
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "nothingiisreal/MN-12B-Celeste-V1.9", "elinas/Chronos-Gold-12B-1.0", "en", "base_model:ThomasComics/MN-12B-Chronos-Gold-Celeste-v1", "base_model:quantized:ThomasComics/MN-12B-Chronos-Gold-Celeste-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T12:16:22Z
--- base_model: ThomasComics/MN-12B-Chronos-Gold-Celeste-v1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - nothingiisreal/MN-12B-Celeste-V1.9 - elinas/Chronos-Gold-12B-1.0 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ThomasComics/MN-12B-Chronos-Gold-Celeste-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF/resolve/main/MN-12B-Chronos-Gold-Celeste-v1.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
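If you would rather script the download than follow the READMEs linked above, a minimal sketch using `huggingface_hub` and `llama-cpp-python` might look like the following; the quant choice and generation settings are illustrative assumptions.

```python
# Sketch: fetch one quant from this repository and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/MN-12B-Chronos-Gold-Celeste-v1-GGUF",
    filename="MN-12B-Chronos-Gold-Celeste-v1.Q4_K_S.gguf",  # "fast, recommended" above
)

llm = Llama(model_path=path, n_ctx=4096)
print(llm("Write the opening line of a ghost story:", max_tokens=64)["choices"][0]["text"])
```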
JunxiongWang/MambaByte_PG19_353M
JunxiongWang
2024-09-09T15:51:31Z
43
0
transformers
[ "transformers", "pytorch", "text-generation", "dataset:pg19", "arxiv:2401.13660", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-04-22T22:36:56Z
---
license: apache-2.0
datasets:
- pg19
metrics:
- perplexity
pipeline_tag: text-generation
---

Trained on 30B bytes of the PG-19 dataset. Model size: 353M parameters. See Table 2 in [MambaByte](https://arxiv.org/abs/2401.13660).
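The original card ships no usage snippet, but the sibling MambaByte releases in this collection all load and sample byte-level text the same way, so the pattern below should carry over; the prompt is an example.

```python
import torch
import numpy as np
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

model = MambaLMHeadModel.from_pretrained(
    "JunxiongWang/MambaByte_PG19_353M", device='cuda', dtype=torch.float32
)

# MambaByte consumes raw UTF-8 bytes rather than tokens.
text = "It was a dark and stormy night."
text_byte = np.frombuffer(text.encode('utf-8'), dtype=np.uint8)
input_ids = torch.from_numpy(text_byte[None, :].copy()).long().cuda()

sample = model.generate(
    input_ids=input_ids,
    max_length=2048,
    cg=True,
    return_dict_in_generate=True,
    output_scores=True,
    enable_timing=True,
    temperature=1,
    top_k=256,
    top_p=0.9,
)

# Decode the generated byte sequence back to text.
print(bytes(sample.sequences[0].tolist()).decode('utf-8'))
```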
JunxiongWang/MambaByte_Books
JunxiongWang
2024-09-09T15:49:13Z
32
2
transformers
[ "transformers", "pytorch", "text-generation", "arxiv:2401.13660", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-04-22T22:36:20Z
---
license: apache-2.0
metrics:
- perplexity
pipeline_tag: text-generation
---

Trained on 30B bytes. Model size: 353M parameters. See Table 2 in [MambaByte](https://arxiv.org/abs/2401.13660).

To use:

```python
import torch
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
import numpy as np

model = MambaLMHeadModel.from_pretrained("JunxiongWang/MambaByte_Books", device='cuda', dtype=torch.float32)

# MambaByte consumes raw UTF-8 bytes rather than tokens.
text = "In fair Verona, where we lay our scene, From ancient grudge, break to new mutiny, Where civil blood makes civil hands unclean."
text_byte = np.frombuffer(text.encode('utf-8'), dtype=np.uint8)
input_ids = torch.from_numpy(text_byte[None, :].copy()).long().cuda()

sample = model.generate(
    input_ids=input_ids,
    max_length=2048,
    cg=True,
    return_dict_in_generate=True,
    output_scores=True,
    enable_timing=True,
    temperature=1,
    top_k=256,
    top_p=0.9,
)

# Decode the generated byte sequence back to text.
print(bytes(sample.sequences[0].tolist()).decode('utf-8'))
```

Output:

```
In fair Verona, where we lay our scene, From ancient grudge, break to new mutiny, Where civil blood makes civil hands unclean. You are speaking truly, Marcheno . . . Here is a goodly show of arms we might have at you, That which is good we might do in each other's place. Your arms are too swift, O King, to make wars mean, too slow, Too fast without measure. What weapon could I wish that you and your brother should be better than this great king? I will buy such great arms as this, for this city where we lay our hands on them, made them our ends, Our hands . . ." On the rack, among the shards of crockery, we found a part of the face of the old king. We spit and cut ourselves with these and fought the like battle, and no difference of style or color marked our heads, so that the gods should have fallen to their knees in prayer. When we had gone away into the night, on our way back from the mountains of Montiagra, to dinner at the Palace of Ancestry, and back again to his chambers, Midas begged me to be so kind as to accept one of his books. I have bought it from the very best that the Princess of Orchis and I possess, and I am wondering that she should lend it to me. With her gifts are impressed the divinities of her forefathers, and with her own heart. THIS evening, when we came into the room together, there stood the King surrounded by his wives, his mistresses, and his concubines, and all his people, all his women, and all his friends. Some of them dropped their heads at his feet, their eyes were blank, and their nostrils were red; some held their hands in awkward loops, and held their faces high. As if they were sorry that they had not their true natures, they said to one another, "May the gods grant us this that we should not have been fortunate and sent these drowned children to our king to suffer." Some also said, "May the bride be with the groom too, for she hath the most wonderful thing in the world." But none of them was very anxious, because they did not know what they were to do. The King took his sw
```
mradermacher/CodeLlama-34b-hf-GGUF
mradermacher
2024-09-09T15:30:17Z
53
0
transformers
[ "transformers", "gguf", "llama-2", "code", "base_model:codellama/CodeLlama-34b-hf", "base_model:quantized:codellama/CodeLlama-34b-hf", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-09-09T03:40:46Z
--- base_model: codellama/CodeLlama-34b-hf language: - code library_name: transformers license: llama2 quantized_by: mradermacher tags: - llama-2 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/codellama/CodeLlama-34b-hf <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/CodeLlama-34b-hf-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q2_K.gguf) | Q2_K | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.IQ3_XS.gguf) | IQ3_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q3_K_S.gguf) | Q3_K_S | 14.7 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.IQ3_S.gguf) | IQ3_S | 14.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.IQ3_M.gguf) | IQ3_M | 15.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q3_K_M.gguf) | Q3_K_M | 16.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q3_K_L.gguf) | Q3_K_L | 17.9 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.IQ4_XS.gguf) | IQ4_XS | 18.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q4_K_S.gguf) | Q4_K_S | 19.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q4_K_M.gguf) | Q4_K_M | 20.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q5_K_S.gguf) | Q5_K_S | 23.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q5_K_M.gguf) | Q5_K_M | 23.9 | | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q6_K.gguf) | Q6_K | 27.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CodeLlama-34b-hf-GGUF/resolve/main/CodeLlama-34b-hf.Q8_0.gguf) | Q8_0 | 36.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
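As with the other GGUF repositories here, these files run in any llama.cpp-compatible runtime. A minimal `llama-cpp-python` sketch for code completion follows; the file choice, sampling settings, and prompt are illustrative assumptions.

```python
# Sketch: code completion with one of the quantized CodeLlama files above.
from llama_cpp import Llama

llm = Llama(model_path="CodeLlama-34b-hf.Q4_K_M.gguf", n_ctx=4096)
out = llm("def fibonacci(n):\n    ", max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```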
breco/whisper-tiny-few-reports
breco
2024-09-09T15:14:15Z
34
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "spa", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-09T14:54:32Z
--- library_name: transformers language: - spa license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: Whisper Tiny Few Reports - vfranchis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny Few Reports - vfranchis This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Few reports 1.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.9024 - Wer: 99.3499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 3.9187 | 3.0769 | 10 | 3.0330 | 102.4919 | | 2.7035 | 6.1538 | 20 | 2.1277 | 126.2189 | | 2.0516 | 9.2308 | 30 | 1.6559 | 82.9902 | | 1.5833 | 12.3077 | 40 | 1.3689 | 83.5320 | | 1.301 | 15.3846 | 50 | 1.1881 | 98.0498 | | 1.1102 | 18.4615 | 60 | 1.0681 | 111.9177 | | 1.0133 | 21.5385 | 70 | 0.9908 | 106.1755 | | 0.8926 | 24.6154 | 80 | 0.9402 | 100.3250 | | 0.8362 | 27.6923 | 90 | 0.9127 | 99.2416 | | 0.845 | 30.7692 | 100 | 0.9024 | 99.3499 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
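The card leaves usage unspecified; a minimal transcription sketch with the 🤗 `pipeline` API could look like this (the audio file name is a placeholder). Given the roughly 99% WER reported above, treat the output as experimental.

```python
# Sketch: transcribe a Spanish audio file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="breco/whisper-tiny-few-reports")
print(asr("report.wav")["text"])
```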
DreadPoor/ScaduTorrent1.1-8b-model_stock
DreadPoor
2024-09-09T15:10:41Z
13
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:Blackroot/Llama-3-8B-Abomination-LORA", "base_model:merge:Blackroot/Llama-3-8B-Abomination-LORA", "base_model:Blackroot/Llama3-RP-Lora", "base_model:merge:Blackroot/Llama3-RP-Lora", "base_model:ResplendentAI/Llama3_RP_ORPO_LoRA", "base_model:merge:ResplendentAI/Llama3_RP_ORPO_LoRA", "base_model:failspy/Llama-3-8B-Instruct-MopeyMule", "base_model:merge:failspy/Llama-3-8B-Instruct-MopeyMule", "base_model:kloodia/lora-8b-math", "base_model:merge:kloodia/lora-8b-math", "base_model:zementalist/llama-3-8B-chat-psychotherapist", "base_model:merge:zementalist/llama-3-8B-chat-psychotherapist", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T15:02:37Z
--- base_model: - failspy/Llama-3-8B-Instruct-MopeyMule - kloodia/lora-8b-math - failspy/Llama-3-8B-Instruct-MopeyMule - Blackroot/Llama3-RP-Lora - failspy/Llama-3-8B-Instruct-MopeyMule - zementalist/llama-3-8B-chat-psychotherapist - failspy/Llama-3-8B-Instruct-MopeyMule - Blackroot/Llama-3-8B-Abomination-LORA - failspy/Llama-3-8B-Instruct-MopeyMule - ResplendentAI/Llama3_RP_ORPO_LoRA library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA) as a base. ### Models Merged The following models were included in the merge: * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) + [kloodia/lora-8b-math](https://huggingface.co/kloodia/lora-8b-math) * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) + [Blackroot/Llama3-RP-Lora](https://huggingface.co/Blackroot/Llama3-RP-Lora) * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) + [zementalist/llama-3-8B-chat-psychotherapist](https://huggingface.co/zementalist/llama-3-8B-chat-psychotherapist) * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) + [ResplendentAI/Llama3_RP_ORPO_LoRA](https://huggingface.co/ResplendentAI/Llama3_RP_ORPO_LoRA) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: failspy/Llama-3-8B-Instruct-MopeyMule+Blackroot/Llama3-RP-Lora - model: failspy/Llama-3-8B-Instruct-MopeyMule+zementalist/llama-3-8B-chat-psychotherapist - model: failspy/Llama-3-8B-Instruct-MopeyMule+ResplendentAI/Llama3_RP_ORPO_LoRA - model: failspy/Llama-3-8B-Instruct-MopeyMule+kloodia/lora-8b-math merge_method: model_stock base_model: failspy/Llama-3-8B-Instruct-MopeyMule+Blackroot/Llama-3-8B-Abomination-LORA normalize: false int8_mask: true dtype: bfloat16 ```
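To try the merged checkpoint, a minimal loading sketch might look like the following; the prompt and generation settings are assumptions.

```python
# Sketch: load the merge result for text generation.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="DreadPoor/ScaduTorrent1.1-8b-model_stock",
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config above
    device_map="auto",
)
print(pipe("Tell me a short story about a lighthouse.", max_new_tokens=64)[0]["generated_text"])
```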
Luyen009/test-model
Luyen009
2024-09-09T15:09:25Z
71
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-08-01T17:29:17Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: test-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3947 - Accuracy: 0.8833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.3 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.0551 | 0.9960 | 62 | 0.9662 | 0.5412 | | 0.9308 | 1.9920 | 124 | 0.8707 | 0.5433 | | 0.7962 | 2.9880 | 186 | 0.7354 | 0.6841 | | 0.7254 | 4.0 | 249 | 0.7400 | 0.7042 | | 0.5941 | 4.9960 | 311 | 0.5405 | 0.8169 | | 0.4736 | 5.9920 | 373 | 0.5045 | 0.8471 | | 0.4399 | 6.9880 | 435 | 0.4102 | 0.8813 | | 0.374 | 8.0 | 498 | 0.4052 | 0.8813 | | 0.3425 | 8.9960 | 560 | 0.4335 | 0.8632 | | 0.3171 | 9.9598 | 620 | 0.3947 | 0.8833 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
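A minimal inference sketch for this audio classifier (the audio path is a placeholder):

```python
# Sketch: classify an audio clip with the fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

clf = pipeline("audio-classification", model="Luyen009/test-model")
for pred in clf("clip.wav"):
    print(pred["label"], round(pred["score"], 3))
```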
wu981526092/MK10
wu981526092
2024-09-09T15:03:58Z
52
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T15:03:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wu981526092/MK9
wu981526092
2024-09-09T15:03:16Z
46
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T15:02:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wu981526092/MK6
wu981526092
2024-09-09T15:01:04Z
54
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T15:00:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wu981526092/MK5
wu981526092
2024-09-09T15:00:23Z
59
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T14:59:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
saadamin2k13/italian_augmented_text_generation
saadamin2k13
2024-09-09T15:00:18Z
58
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-09T14:53:01Z
---
language:
- it
metrics:
- bleu
- meteor
- bertscore
- chrf
- comet
library_name: transformers
---

# Model Card for saadamin2k13/italian_augmented_text_generation

<!-- Provide a quick summary of what the model is/does. -->

This model card describes a fine-tuned ByT5 model for the task of text generation from meaning representations (DRS).

## Model Details

We took a pre-trained byt5-base model and fine-tuned it on the Parallel Meaning Bank (a dataset of DRS-text pairs). Furthermore, we enriched the gold_silver flavors of PMB (release 5.0.0) with different augmentation strategies.

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

To use the model, follow the code below for a quick response.

```python
from transformers import ByT5Tokenizer, T5ForConditionalGeneration

# Initialize the tokenizer and model (the repository id matches this model's id on the Hub)
tokenizer = ByT5Tokenizer.from_pretrained('saadamin2k13/italian_augmented_text_generation', max_length=512)
model = T5ForConditionalGeneration.from_pretrained('saadamin2k13/italian_augmented_text_generation')

# Example DRS input
example = "male.n.02 Name 'Tom' yell.v.01 Agent -1 Time +1 time.n.08 TPR now"

# Tokenize and prepare the input
x = tokenizer(example, return_tensors='pt', padding=True, truncation=True, max_length=512)['input_ids']

# Generate output
output = model.generate(x)

# Decode and print the output text
pred_text = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(pred_text)
```
wu981526092/MK3
wu981526092
2024-09-09T14:58:55Z
46
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T14:58:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wu981526092/MK12
wu981526092
2024-09-09T14:57:25Z
49
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T14:56:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rodrigo1771/BioLinkBERT-base-drugtemist-en-fasttext-85-ner
Rodrigo1771
2024-09-09T14:48:01Z
54
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:Rodrigo1771/drugtemist-en-fasttext-85-ner", "base_model:michiyasunaga/BioLinkBERT-base", "base_model:finetune:michiyasunaga/BioLinkBERT-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-09-09T14:15:06Z
--- library_name: transformers license: apache-2.0 base_model: michiyasunaga/BioLinkBERT-base tags: - token-classification - generated_from_trainer datasets: - Rodrigo1771/drugtemist-en-fasttext-85-ner metrics: - precision - recall - f1 - accuracy model-index: - name: output results: - task: name: Token Classification type: token-classification dataset: name: Rodrigo1771/drugtemist-en-fasttext-85-ner type: Rodrigo1771/drugtemist-en-fasttext-85-ner config: DrugTEMIST English NER split: validation args: DrugTEMIST English NER metrics: - name: Precision type: precision value: 0.925 - name: Recall type: recall value: 0.9310344827586207 - name: F1 type: f1 value: 0.9280074314909428 - name: Accuracy type: accuracy value: 0.9986883598917199 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the Rodrigo1771/drugtemist-en-fasttext-85-ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0077 - Precision: 0.925 - Recall: 0.9310 - F1: 0.9280 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 448 | 0.0054 | 0.9404 | 0.8975 | 0.9185 | 0.9986 | | 0.016 | 2.0 | 896 | 0.0045 | 0.9162 | 0.9171 | 0.9166 | 0.9986 | | 0.0039 | 3.0 | 1344 | 0.0058 | 0.9064 | 0.9385 | 0.9222 | 0.9985 | | 0.0022 | 4.0 | 1792 | 0.0057 | 0.8963 | 0.9348 | 0.9151 | 0.9985 | | 0.0017 | 5.0 | 2240 | 0.0060 | 0.9178 | 0.9366 | 0.9271 | 0.9987 | | 0.0012 | 6.0 | 2688 | 0.0063 | 0.9254 | 0.9254 | 0.9254 | 0.9987 | | 0.0008 | 7.0 | 3136 | 0.0069 | 0.9130 | 0.9394 | 0.9260 | 0.9986 | | 0.0005 | 8.0 | 3584 | 0.0069 | 0.9214 | 0.9292 | 0.9253 | 0.9986 | | 0.0004 | 9.0 | 4032 | 0.0077 | 0.9249 | 0.9292 | 0.9270 | 0.9987 | | 0.0004 | 10.0 | 4480 | 0.0077 | 0.925 | 0.9310 | 0.9280 | 0.9987 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
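The card leaves usage unspecified; the following is a minimal inference sketch, assuming the standard Transformers token-classification pipeline (the `aggregation_strategy` setting and the example sentence are illustrative choices, not taken from the card):

```python
from transformers import pipeline

# Load the fine-tuned drug-mention NER model from the Hub
ner = pipeline(
    "token-classification",
    model="Rodrigo1771/BioLinkBERT-base-drugtemist-en-fasttext-85-ner",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)

# Hypothetical clinical-style sentence
print(ner("The patient was given paracetamol and ibuprofen for the fever."))
```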
bisoye/distilbert-base-cased_token_classification
bisoye
2024-09-09T14:45:50Z
68
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-09-09T14:37:37Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-cased tags: - generated_from_trainer metrics: - wer model-index: - name: distilbert-base-cased_token_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased_token_classification This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2266 - Wer: 0.0531 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2092 | 1.0 | 76 | 0.2453 | 0.0549 | | 0.0598 | 2.0 | 152 | 0.2266 | 0.0531 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
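No usage snippet is given; below is a minimal sketch with the generic Transformers auto classes (the example sentence is hypothetical, and the label set depends on the unknown training dataset, so labels are read from the model config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "bisoye/distilbert-base-cased_token_classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is John and I live in Berlin.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its predicted label; label names come from the model config
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred])
```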
PrincetonPLI/Llama-3-8B-Instruct-SkillMix
PrincetonPLI
2024-09-09T14:43:49Z
162
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2408.14774", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:finetune:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-08T21:56:12Z
--- base_model: - meta-llama/Meta-Llama-3-8B library_name: transformers license: llama3 --- # Model Card for Llama-3-8B-Instruct-SkillMix This model was SFT-ed from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with data generated by the Seed-Dataset Agnostic version of the Instruct-SkillMix pipeline. ## Training Details We used 4000 examples from Instruct-SkillMix-SDA(k=2) (data available at [PrincetonPLI/Instruct-SkillMix-SDA](https://huggingface.co/datasets/PrincetonPLI/Instruct-SkillMix-SDA/blob/main/data/ism_sda_k2_4K.json)). - LR: 2e-5 - Linear Warmup Ratio: 0.03 - Decay: Cosine Decay to 0 - Batch Size: 128 - epoch: 7 / 15 - Optimizer: AdamW - Sequence Length: 1024 ## Evaluation Details We provide the set of generation configuration used for evaluation. ### AlpacaEval - model_kwargs: - torch_dtype: 'bfloat16' - max_new_tokens: 2048 - temperature: 0.9 - top_p: 1.0 - do_sample: True - stop_token_ids: - 128001 - 128009 ### MTBench - model_kwargs: - torch_dtype: 'bfloat16' - max_new_tokens: 1024 - temperature: 0.7 - stop_token_ids: - 128001 - 128009 ### WildBench - model_kwargs: - torch_dtype: 'bfloat16' - max_new_tokens: 4096 - temperature: 0.9 - top_p: 1.0 - do_sample: True - stop_token_ids: - 128001 - 128009 ## Citation Paper: [Instruct-SkillMix](https://www.arxiv.org/abs/2408.14774) ``` @misc{kaur2024instructskillmixpowerfulpipelinellm, title={Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning}, author={Simran Kaur and Simon Park and Anirudh Goyal and Sanjeev Arora}, year={2024}, eprint={2408.14774}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2408.14774}, } ``` ## Contact Simran Kaur, Princeton University Simon Park, Princeton University {skaur, juhyunp} 'at' princeton 'dot' edu
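The card lists generation settings but no loading code; below is a minimal sketch wiring up the AlpacaEval configuration above (it assumes the repository ships a Llama-3-style chat template, and the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PrincetonPLI/Llama-3-8B-Instruct-SkillMix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for writing clear emails."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generation settings taken from the AlpacaEval section of this card
output_ids = model.generate(
    input_ids,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.9,
    top_p=1.0,
    eos_token_id=[128001, 128009],  # the stop token ids listed above
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```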
eknathtecho/whisper-merged-V4
eknathtecho
2024-09-09T14:37:10Z
47
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
automatic-speech-recognition
2024-09-09T14:32:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
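The card itself is an untouched template, but the repository tags (whisper, automatic-speech-recognition, 8-bit, bitsandbytes) suggest an 8-bit quantized Whisper checkpoint; a minimal loading sketch under that assumption (the audio path is a placeholder):

```python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, BitsAndBytesConfig, pipeline

model_id = "eknathtecho/whisper-merged-V4"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # assumes an 8-bit bitsandbytes checkpoint
    device_map="auto",
)

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("sample.wav"))  # placeholder audio file
```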
ksabeh/mistral-ft-oa-mine
ksabeh
2024-09-09T14:27:44Z
15
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-09-09T08:21:08Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 datasets: - generator library_name: peft license: apache-2.0 tags: - trl - sft - generated_from_trainer model-index: - name: mistral-ft-oa-mine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-ft-oa-mine This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
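The card documents training only; a minimal sketch for loading the adapter on top of its Mistral base with PEFT (the instruction formatting is an assumption, since the card does not state the expected input format):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the Mistral-7B-Instruct-v0.2 base weights and applies this adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(
    "ksabeh/mistral-ft-oa-mine",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Hypothetical prompt in the base model's [INST] format
prompt = "[INST] Extract the product attributes from this title: 'Stainless Steel Water Bottle, 750 ml' [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```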
eknathtecho/whisper-merged-V3
eknathtecho
2024-09-09T14:27:18Z
48
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
automatic-speech-recognition
2024-09-09T14:19:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bisoye/token_classification
bisoye
2024-09-09T14:23:31Z
72
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-09-09T13:56:50Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-cased tags: - generated_from_trainer metrics: - wer model-index: - name: token_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # token_classification This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2146 - Wer: 0.0549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2195 | 1.0 | 76 | 0.2447 | 0.0540 | | 0.0587 | 2.0 | 152 | 0.2146 | 0.0549 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
ucsahin/TraVisionLM-DPO
ucsahin
2024-09-09T14:15:53Z
70
5
transformers
[ "transformers", "safetensors", "travisionlm", "text-generation", "image-text-to-text", "custom_code", "tr", "dataset:ucsahin/TR-VLM-DPO-Dataset", "base_model:ucsahin/TraVisionLM-base", "base_model:finetune:ucsahin/TraVisionLM-base", "license:apache-2.0", "autotrain_compatible", "region:us" ]
image-text-to-text
2024-09-08T17:03:24Z
---
library_name: transformers
datasets:
- ucsahin/TR-VLM-DPO-Dataset
language:
- tr
pipeline_tag: image-text-to-text
license: apache-2.0
base_model: ucsahin/TraVisionLM-base
---

<!-- # TraVisionLM - Fast and Native Turkish Visual Language Model -->

<div style="text-align: center;">
  <img src="logo-white-dpo.png" alt="logo" style="width: 120%; height: auto;">
</div>

<!-- Provide a quick summary of what the model is/does. -->

## 🎯 This is the DPO-optimized version of the base model [TraVisionLM-base](https://huggingface.co/ucsahin/TraVisionLM-base). When compared to the base model, the DPO version answers questions more **accurately**, **truthfully**, and with **greater detail**.

#### 🤖 **What is Direct Preference Optimization (DPO)?**

Direct Preference Optimization is a technique used to align a model's behavior with human preferences. The process works by showing the model several possible answers to a question and training it to favor the response preferred by humans. This leads to more reliable and truthful responses, as the model learns not only from raw data but also from user feedback. DPO helps to **minimize hallucinations** and improves the **quality** and **accuracy** of the model's answers.

### 🚀 **Model demo:** [TraVisionLM-DPO-Demo](https://huggingface.co/spaces/ucsahin/TraVisionLM-Demo)

### 📚 **Visual Language Model DPO Training Notebook:** [Colab Notebook](https://colab.research.google.com/drive/1ypEPQ3RBX3_X7m9qfmU-Op-vGgOjab_z?usp=sharing)

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [ucsahin](https://huggingface.co/ucsahin)
- **Model type:** [Image-Text-to-Text](https://huggingface.co/tasks/image-text-to-text)
- **Language(s) (NLP):** *Turkish*
- **License:** *Apache license 2.0*

---

## English

# 🎉 Introducing TraVisionLM: The First of Its Kind! 🚀

🌟 This is a very fast and small (only 875M parameters) visual language model on Hugging Face that responds to Turkish instructions given an image input! 🌟

✨ Developed to be compatible with the Transformers library, TraVisionLM is a breeze to load, fine-tune, and use for lightning-fast inferences, all without needing any external libraries! ⚡️

Ready to experience the Turkish visual language model? Let's go! 🇹🇷🖼️🤖

## Türkçe

# 🎉 TraVisionLM: Türünün İlk Örneği! 🚀

🌟 Çok hızlı ve küçük boyutlu (sadece 875M parametre) Türkçe görsel dil modeli! Bir görüntü ve Türkçe talimat verildiğinde Türkçe yanıt üretir! 🌟

✨ Transformers kütüphanesi ile uyumlu olarak geliştirilen TraVisionLM modeli ile, yükleme, eğitme ve dış kütüphaneler kullanmadan hızlı sonuçlar almak çok kolay! ⚡️

Türkçe görsel dil modelini deneyimlemeye hazır mısınız? Hadi başlayalım! 🇹🇷🖼️🤖

---

## How to Get Started with the Model

In Transformers, you can load the model and run inference as follows:

**IMPORTANT NOTE:** The TraVisionLM model is not yet natively integrated into the Transformers library, so you need to set ```trust_remote_code=True``` when loading the model. This will download the ```configuration_travisionlm.py```, ```modeling_travisionlm.py``` and ```processing_travisionlm.py``` files from the repo. You can check out the content of these files under the *Files and Versions* tab and pin the specific versions if you have any concerns regarding malicious code.
```python from transformers import AutoModelForCausalLM, AutoProcessor import torch import requests from PIL import Image model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda") # you can also load the model in bfloat16 or float16 # model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda") processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True) image = Image.open("galata.jpg").convert("RGB") prompt = "Resimde gösterilen yapı hangi şehirdedir?" # short caption # prompt = "Detaylı açıkla" # detailed caption # prompt = "Kısaca açıkla" # short caption inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2) output_text = processor.batch_decode(outputs, skip_special_tokens=True)[0] print("Model response: ", output_text) ``` You can also perform batch inference as follows (make sure that all images have a prompt text associated with them): ```python from transformers import AutoModelForCausalLM, AutoProcessor from PIL import Image model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda") # you can also load the model in bfloat16 or float16 # model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda") processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True) image = Image.open("galata.jpg").convert("RGB") prompt_list = [ 'Kısaca açıkla', 'Detaylı açıkla', 'Resimde ne görünüyor?', 'Gündüz vakti mi yoksa akşam vakti mi?', 'Resimde ilgi çekici unsurlar nelerdir?', ] inputs = processor(text=prompt_list, images=len(prompt_list)*[image], padding="longest", return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2) output_text_list = processor.batch_decode(outputs, skip_special_tokens=True) for output_text in output_text_list: print(f"Model response: {output_text}\n\n\n") ``` The image used for code examples above: ![image info](galata.jpg) Then, the output will look like this: ``` """ Model response: Kısaca açıkla Görsel, İstanbul'un Beyoğlu ilçesinde yer alan Galata Kulesi'ni göstermektedir. Bu kule, tarihi boyunca çeşitli mimari tarzlar ve estetik anlayışlarla öne çıkmaktadır. Model response: Detaylı açıkla Görselde, İstanbul'un tarihi yarımadasında yer alan Galata Kulesi dikkat çekiyor. Bu kule, hem mimari yapısı hem de tarihî önemi ile bölgenin önemli bir simgesi olarak öne çıkıyor. Kule, dönemin karakteristik özelliklerini taşırken, özellikle Osmanlı dönemine ait süslemeleri ve zarif hatlarıyla göz alıcı bir görünüme sahip.Kulenin üst kısmında, altın rengi bir kubbe yükseliyor; bu kubbenin ortasında belirgin bir şekilde yer alan detaylar, hem estetik açıdan hoş görünmekte hem de şehrin kültürel zenginliğini artırmaktadır.Kale etrafında ise çeşitli yapılar sıralanmış durumda. Bu yapıların mimarisi, farklı dönemlerden izler taşıyor gibi görünüyor; bazıları taşlardan inşa edilmişken, diğerleri daha modern tasarımlarla bezeli. 
Ayrıca, su yolu boyunca uzanan şehir manzarası, kentin dinamik yapısını ve denizle olan bağlantısını vurguluyor.Gece saatlerinde çekilmiş olan bu görüntüde, sıcak tonlardaki ışıkların ve gölgelerin oluşturduğu büyüleyici bir atmosfer hâkim. Bu durum, şehrin doğal güzelliklerinin yanı sıra, tarihsel dokusu ve kültürel zenginlikleri üzerinde düşünmeye davet ediyor. Tüm bu unsurlar, İstanbul’un kozmopolit yapısını ve kültürel çeşitliliğini gözler önüne seriyor. Model response: Resimde ne görünüyor? Görselde, İstanbul'un Beyoğlu ilçesindeki Galata Kulesi yer alıyor. Bu kule, tarihi ve kültürel zenginlikleri ile bilinir. Güneşin batışıyla birlikte sıcak tonlarda ışıkların yayıldığı bu alan, şehrin kozmopolit yapısını gözler önüne seriyor. Ayrıca, su kenarında inşa edilmiş binalar, hem tarihi dokunun hem de modern hayatın birleşimini sergiliyor. Model response: Gündüz vakti mi yoksa akşam vakti mi? Görseldeki ışıklandırma ve renk tonları, günün ilerleyen saatlerini işaret ediyor gibi görünüyor. Bu durum, şehir manzarasını daha da çekici kılıyor. Model response: Resimde ilgi çekici unsurlar nelerdir? Görselde dikkat çeken unsurlardan biri, tarihî ve kültürel önemi olan Galata Kulesi'dir. Kule, hem mimari yapısı hem de çevresindeki tarihi yapılarla birlikte, İstanbul'un önemli bir simgesi olarak öne çıkmaktadır. Ayrıca, gün batımı esnasında oluşan atmosferin ışığı, şehrin canlılığını artırmakta ve bu özel anın görsel zenginliğini artırmaktadır. Bu unsurlar, İstanbul'un sosyal ve ekonomik yapısını gözler önüne sermektedir. """ ``` ---
tartuNLP/EstBERT_NER
tartuNLP
2024-09-09T14:10:59Z
167
0
transformers
[ "transformers", "pytorch", "jax", "safetensors", "bert", "token-classification", "et", "arxiv:2011.04784", "base_model:tartuNLP/EstBERT", "base_model:finetune:tartuNLP/EstBERT", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
language: et
license: cc-by-4.0
base_model: tartuNLP/EstBERT
widget:
- text: "Eesti President on Alar Karis."
---

# EstBERT_NER

## Model description

EstBERT_NER is a fine-tuned EstBERT model that can be used for Named Entity Recognition. The model was trained on the Estonian NER dataset created by [Tkachenko et al.](https://www.aclweb.org/anthology/W13-2412.pdf) It can recognize three types of entities: locations (LOC), organizations (ORG) and persons (PER).

## How to use

You can use this model with the Transformers NER pipeline. Post-processing of the results may be necessary, as the model occasionally tags subword tokens as entities.

```python
from transformers import BertTokenizer, BertForTokenClassification
from transformers import pipeline

tokenizer = BertTokenizer.from_pretrained('tartuNLP/EstBERT_NER')
bertner = BertForTokenClassification.from_pretrained('tartuNLP/EstBERT_NER')

nlp = pipeline("ner", model=bertner, tokenizer=tokenizer)
sentence = 'Eesti Ekspressi teada on Eesti Pank uurinud Hansapanga tehinguid , mis toimusid kaks aastat tagasi suvel ja mille käigus voolas panka ligi miljardi krooni ulatuses kahtlast raha .'

ner_results = nlp(sentence)
print(ner_results)
```

```
[{'word': 'Eesti', 'score': 0.9964128136634827, 'entity': 'B-ORG', 'index': 1}, {'word': 'Ekspressi', 'score': 0.9978809356689453, 'entity': 'I-ORG', 'index': 2}, {'word': 'Eesti', 'score': 0.9988121390342712, 'entity': 'B-ORG', 'index': 5}, {'word': 'Pank', 'score': 0.9985784292221069, 'entity': 'I-ORG', 'index': 6}, {'word': 'Hansapanga', 'score': 0.9979034662246704, 'entity': 'B-ORG', 'index': 8}]
```

## BibTeX entry and citation info

```
@misc{tanvir2020estbert,
      title={EstBERT: A Pretrained Language-Specific BERT for Estonian},
      author={Hasan Tanvir and Claudia Kittask and Kairit Sirts},
      year={2020},
      eprint={2011.04784},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
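One way to handle the subword-tagging issue mentioned above is the pipeline's built-in entity grouping; a short sketch (`aggregation_strategy` is a standard Transformers pipeline option, not something this card documents):

```python
from transformers import pipeline

# Reload the pipeline with grouping enabled so subword pieces are merged into full entity spans
nlp_grouped = pipeline("ner", model="tartuNLP/EstBERT_NER", aggregation_strategy="simple")
print(nlp_grouped("Eesti Ekspressi teada on Eesti Pank uurinud Hansapanga tehinguid."))
```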
Unbabel/TowerInstruct-7B-v0.2
Unbabel
2024-09-09T14:08:25Z
5,472
30
transformers
[ "transformers", "safetensors", "llama", "text-generation", "translation", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es", "arxiv:2402.17733", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2024-02-09T15:30:48Z
--- language: - en - de - fr - zh - pt - nl - ru - ko - it - es license: cc-by-nc-4.0 metrics: - comet pipeline_tag: translation --- # Model Card for TowerInstruct-7B-v0.2 ## Model Details ### Model Description TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.2 is the first model in the series. The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph/document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation. We will release more details in the upcoming technical report. For now, you can check results obtained with the model [here](https://unbabel.com/announcing-tower-an-open-multilingual-llm-for-translation-related-tasks/). - **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay - **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions. - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian - **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. - **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1) **Update**: TowerInstruct-7B-v0.2 has more reliable document-level translation capabilities in comparison with TowerInstruct-7B-v0.1. The new version of TowerBlocks used to train v0.2 is also available in the Tower collection. ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources: - Translation (sentence and paragraph-level) - Automatic Post Edition - Machine Translation Evaluation - Context-aware Translation - Terminology-aware Translation - Multi-reference Translation - Named-entity Recognition - Paraphrase Generation - Synthetic Chat data - Code instructions You can find the dataset and all data sources of [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) here. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-7B-v0.2", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```

### Out-of-Scope Use

The model is not guaranteed to perform for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant. We are currently working on improving quality and consistency on document-level translation. This model is not intended to be used as a document-level translator.

## Bias, Risks, and Limitations

TowerInstruct-v0.2 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).

## Prompt Format

TowerInstruct-v0.2 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```

### Supervised tasks

The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.

## Training Details

### Training Data

Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).

#### Training Hyperparameters

The following hyperparameters were used during training:
- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048

## Citation

```bibtex
@misc{tower_llm_2024,
      title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
      author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
      year={2024},
      eprint={2402.17733},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
riveRiPH/MN-12B-Lyra-v4-5bpw-h6-exl2
riveRiPH
2024-09-09T14:04:03Z
5
1
null
[ "safetensors", "mistral", "en", "base_model:Sao10K/MN-12B-Lyra-v4", "base_model:quantized:Sao10K/MN-12B-Lyra-v4", "license:cc-by-nc-4.0", "5-bit", "exl2", "region:us" ]
null
2024-09-09T13:29:24Z
--- base_model: Sao10K/MN-12B-Lyra-v4 license: cc-by-nc-4.0 language: - en --- ![Lyra](https://huggingface.co/Sao10K/MN-12B-Lyra-v4/resolve/main/lyra.png) # MN-12B-Lyra-v4-5bpw-h6-exl2 This is a 5bpw h6 exl2 quant of [MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4). The built-in (default) calibration dataset was used for quantization.
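As a quick way to try an exl2 quant like this one, a minimal `exllamav2` sketch follows. This is an illustrative example, not part of the original card: it assumes a recent exllamav2 release with the dynamic generator, and the local path and prompt are placeholders.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path: point this at a local download of the quantized repo
config = ExLlamaV2Config("./MN-12B-Lyra-v4-5bpw-h6-exl2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time,", max_new_tokens=128))
```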
YaraKyrychenko/bert-nli-ukraine-solidarity
YaraKyrychenko
2024-09-09T14:02:21Z
45
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "uk", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-22T16:07:32Z
--- language: - uk - ru --- Kyrychenko, Y., Brik, T., Linden, S. van der, & Roozenbeek, J. (2024). Social identity correlates of social media engagement before and after the 2022 Russian invasion of Ukraine. https://doi.org/10.17605/OSF.IO/RMC3E
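The card does not document usage or label names, but since this is a `text-classification` checkpoint, a minimal sketch with the Transformers pipeline should work. The example input is a placeholder, and the label semantics are not documented; check the model's `config.json` for the actual label mapping.

```python
from transformers import pipeline

# DeBERTa-v2 classifier for Ukrainian/Russian social media text
classifier = pipeline(
    "text-classification",
    model="YaraKyrychenko/bert-nli-ukraine-solidarity",
)
# Returns a label and score; label meanings are undocumented in the card
print(classifier("Слава Україні!"))
```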
MCG-NJU/VFIMamba_S
MCG-NJU
2024-09-09T14:01:26Z
291
2
null
[ "VFIMamba_S", "arxiv:2407.02315", "license:apache-2.0", "region:us" ]
null
2024-09-02T13:09:13Z
--- license: apache-2.0 library_name: vfi-mamba --- # VFIMamba: Video Frame Interpolation with State Space Models This is the official checkpoint library for [VFIMamba: Video Frame Interpolation with State Space Models](https://arxiv.org/abs/2407.02315). Please refer to [this repository](https://github.com/MCG-NJU/VFIMamba) for our code. ## Model Description VFIMamba is the first approach to adapt the SSM model to the video frame interpolation task. 1. We devise the Mixed-SSM Block (MSB) for efficient inter-frame modeling using S6. 2. We explore various rearrangement methods to convert two frames into a sequence, discovering that interleaved rearrangement is more suitable for VFI tasks. 3. We propose a curriculum learning strategy to further leverage the potential of the S6 model. Experimental results demonstrate that VFIMamba achieves state-of-the-art performance across various datasets, in particular highlighting the potential of the SSM model for high-resolution VFI tasks. ## Usage We provide two models, an efficient version (VFIMamba-S) and a stronger one (VFIMamba). You can choose the one you need by specifying the `model` parameter. ### Manually Load Please refer to [the instructions here](https://github.com/MCG-NJU/VFIMamba/tree/main?tab=readme-ov-file#sunglassesplay-with-demos) for manually loading the checkpoints and a more customized experience.

```bash
python demo_2x.py --model **model[VFIMamba_S/VFIMamba]** # for 2x interpolation
python demo_Nx.py --n 8 --model **model[VFIMamba_S/VFIMamba]** # for 8x interpolation
```

### Hugging Face Demo

For the Hugging Face demo, please refer to [the code here](https://github.com/MCG-NJU/VFIMamba/blob/main/hf_demo_2x.py).

```bash
python hf_demo_2x.py --model **model[VFIMamba_S/VFIMamba]** # for 2x interpolation
```

## Citation

If you find this project helpful for your research or applications, please feel free to leave a star ⭐️ and cite our paper:

```
@misc{zhang2024vfimambavideoframeinterpolation,
      title={VFIMamba: Video Frame Interpolation with State Space Models},
      author={Guozhen Zhang and Chunxu Liu and Yutao Cui and Xiaotong Zhao and Kai Ma and Limin Wang},
      year={2024},
      eprint={2407.02315},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.02315},
}
```
dalau627/text-to-sql-trainset-v2-right-padding-with-tokenizer
dalau627
2024-09-09T13:58:51Z
42
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-09T13:55:53Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
saadamin2k13/italian_augmented_semantic_parser
saadamin2k13
2024-09-09T13:47:36Z
95
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "text-generation-inference", "it", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-09T13:10:33Z
--- language: - it metrics: - accuracy library_name: transformers tags: - text-generation-inference --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model card describes a fine-tuned byT5 model for the task of Semantic Parsing. ## Model Details We started from a pre-trained byt5-base model and fine-tuned it on the Parallel Meaning Bank dataset (DRS-text pairs). Furthermore, we enriched the gold_silver flavors of PMB (release 5.0.0) with different augmentation strategies. ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> To use the model, follow the code below for quick inference.

```python
from transformers import ByT5Tokenizer, T5ForConditionalGeneration

# Initialize the tokenizer and model
tokenizer = ByT5Tokenizer.from_pretrained('saadamin2k13/italian_augmented_semantic_parser', max_length=512)
model = T5ForConditionalGeneration.from_pretrained('saadamin2k13/italian_augmented_semantic_parser')

# Example sentence
example = "questa macchina è nera."

# Tokenize and prepare the input
x = tokenizer(example, return_tensors='pt', padding=True, truncation=True, max_length=512)['input_ids']

# Generate output
output = model.generate(x)

# Decode and print the output text
pred_text = tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(pred_text)
```
DeveloperZoyel/model
DeveloperZoyel
2024-09-09T13:38:17Z
11
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-09-09T12:47:57Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** developerZoyal - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/nonetrix_-_sillyrp-7b-gguf
RichardErkhov
2024-09-09T13:33:06Z
16
0
null
[ "gguf", "arxiv:2212.04089", "endpoints_compatible", "region:us" ]
null
2024-09-09T08:18:22Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) sillyrp-7b - GGUF - Model creator: https://huggingface.co/nonetrix/ - Original model: https://huggingface.co/nonetrix/sillyrp-7b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [sillyrp-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q2_K.gguf) | Q2_K | 2.53GB | | [sillyrp-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [sillyrp-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.IQ3_S.gguf) | IQ3_S | 2.96GB | | [sillyrp-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [sillyrp-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.IQ3_M.gguf) | IQ3_M | 3.06GB | | [sillyrp-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q3_K.gguf) | Q3_K | 3.28GB | | [sillyrp-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [sillyrp-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [sillyrp-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [sillyrp-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q4_0.gguf) | Q4_0 | 3.83GB | | [sillyrp-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [sillyrp-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [sillyrp-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q4_K.gguf) | Q4_K | 4.07GB | | [sillyrp-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [sillyrp-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q4_1.gguf) | Q4_1 | 4.24GB | | [sillyrp-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q5_0.gguf) | Q5_0 | 4.65GB | | [sillyrp-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [sillyrp-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q5_K.gguf) | Q5_K | 4.78GB | | [sillyrp-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [sillyrp-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q5_1.gguf) | Q5_1 | 5.07GB | | [sillyrp-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q6_K.gguf) | Q6_K | 5.53GB | | [sillyrp-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/nonetrix_-_sillyrp-7b-gguf/blob/main/sillyrp-7b.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 
language: - en tags: - not-for-all-audiences --- ## Silly RP 7B This is one of my first merges, just throwing random things at the wall to see what works. It seems really good in my limited testing, but I can't make any promises about its quality. Just try it yourself and give me feedback I guess, I am rather interested to see what others think about it. I don't know what I am doing, I don't even know the benefits of different merge methods :P ## General setting guide (will update over time, share your settings please): - Chat format ChatML (all the models merged are ChatML) - Needs a high repetition penalty after a while; it starts out good but slowly gets into a loop without it --- base_model: - tavtav/eros-7b-test - NousResearch/Nous-Hermes-2-Mistral-7B-DPO - maywell/Synatra-7B-v0.3-RP - NeverSleep/Noromaid-7B-0.4-DPO - cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5 library_name: transformers tags: - mergekit - merge --- # output This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) as a base. ### Models Merged The following models were included in the merge: * [tavtav/eros-7b-test](https://huggingface.co/tavtav/eros-7b-test) * [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) * [maywell/Synatra-7B-v0.3-RP](https://huggingface.co/maywell/Synatra-7B-v0.3-RP) * [cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5](https://huggingface.co/cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5) ### Configuration The following YAML configuration was used to produce this model:

```yaml
base_model: NeverSleep/Noromaid-7B-0.4-DPO
models:
  - model: maywell/Synatra-7B-v0.3-RP
    parameters:
      weight: 0.2
  - model: tavtav/eros-7b-test
    parameters:
      weight: 0.2
  - model: cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5
    parameters:
      weight: 0.2
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      weight: 0.2
merge_method: task_arithmetic
parameters:
  weight: 0.17
dtype: float16
random_seed: 694201337567099116663322537
```
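For local testing of the GGUF files above, a minimal `llama-cpp-python` sketch along these lines should work. This is an illustrative example, not part of the original card; the quant filename is a placeholder (pick any file from the table), and the repetition penalty value is an assumption based on the card's setting guide.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="sillyrp-7b.Q4_K_M.gguf",  # placeholder: any quant from the table above
    n_ctx=8192,
    chat_format="chatml",  # the merged models all use ChatML
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    repeat_penalty=1.15,  # the card notes long chats loop without a high repetition penalty
)
print(out["choices"][0]["message"]["content"])
```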
onkar125/bert-finetuned-ner
onkar125
2024-09-09T13:31:22Z
71
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-07-16T09:02:30Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9382041086812458 - name: Recall type: recall value: 0.9530461124200605 - name: F1 type: f1 value: 0.9455668725997661 - name: Accuracy type: accuracy value: 0.9863130629304763 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0632 - Precision: 0.9382 - Recall: 0.9530 - F1: 0.9456 - Accuracy: 0.9863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0778 | 1.0 | 1756 | 0.0629 | 0.9111 | 0.9362 | 0.9235 | 0.9830 | | 0.0354 | 2.0 | 3512 | 0.0727 | 0.9332 | 0.9446 | 0.9389 | 0.9842 | | 0.0229 | 3.0 | 5268 | 0.0632 | 0.9382 | 0.9530 | 0.9456 | 0.9863 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
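The card omits usage code; since this is a standard CoNLL-2003 token-classification fine-tune, a minimal pipeline sketch like the following should work (the example sentence is a placeholder).

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword tokens into whole entity spans
ner = pipeline(
    "ner",
    model="onkar125/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited the Google office in Berlin."))
```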
ffxvs/lora-effects
ffxvs
2024-09-09T13:30:13Z
0
2
null
[ "region:us" ]
null
2023-04-06T16:39:21Z
### This repo contains : * [Add Detail (Detail Tweaker)](https://civitai.com/models/58390) * [Add More Details (Detail Enhancer/Tweaker)](https://civitai.com/models/82098) * [Epi Noise Offset](https://civitai.com/models/13941/epinoiseoffset) * [LCM Lora SD v1.5](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5) * [LowRA](https://civitai.com/models/48139/lowra)
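A minimal sketch for applying one of these effect LoRAs with diffusers on an SD 1.5 base follows. This is an illustrative example: the `weight_name` below is a placeholder (use the actual filename from this repo's Files tab), and the LoRA scale is an assumption.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# weight_name is a placeholder; check this repo's file list for the real name
pipe.load_lora_weights("ffxvs/lora-effects", weight_name="add_detail.safetensors")

image = pipe(
    "a cozy cabin interior, highly detailed",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength (assumption)
).images[0]
image.save("out.png")
```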
gaianet/Reflection-Llama-3.1-70B-GGUF
gaianet
2024-09-09T13:25:11Z
45
1
transformers
[ "transformers", "gguf", "llama", "text-generation", "base_model:mattshumer/ref_70_e3", "base_model:quantized:mattshumer/ref_70_e3", "license:llama3.1", "autotrain_compatible", "region:us", "conversational" ]
text-generation
2024-09-06T04:15:54Z
--- license: llama3.1 model_name: Reflection-Llama-3.1-70B base_model: mattshumer/ref_70_e3 inference: false model_creator: mattshumer model_type: llama pipeline_tag: text-generation library_name: transformers quantized_by: Second State Inc. --- # Reflection-Llama-3.1-70B-GGUF ## Original Model [mattshumer/ref_70_e3](https://huggingface.co/mattshumer/ref_70_e3) - The recommended system prompt for this model ```text You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags. ``` - Tips for performance - Recommended `temperature`: `0.5` - Recommended `top_p`: `0.95` - For increased accuracy, append `Think carefully.` at the end of your messages. ## Run with Gaianet **Prompt template** prompt template: `llama-3-chat` **Context size** chat_ctx_size: `128000` **Run with GaiaNet** - Quick start: https://docs.gaianet.ai/node-guide/quick-start - Customize your node: https://docs.gaianet.ai/node-guide/customize *Quantized with llama.cpp b3664*
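Outside of GaiaNet, a minimal `llama-cpp-python` sketch with the recommended sampling settings might look like this. It is an illustrative example, not part of the original card: the quant filename is a placeholder, and the context size is an assumption (the model supports up to 128k).

```python
from llama_cpp import Llama

SYSTEM = (
    "You are a world-class AI system, capable of complex reasoning and reflection. "
    "Reason through the query inside <thinking> tags, and then provide your final "
    "response inside <output> tags. If you detect that you made a mistake in your "
    "reasoning at any point, correct yourself inside <reflection> tags."
)

llm = Llama(
    model_path="Reflection-Llama-3.1-70B.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,  # assumption: raise toward 128k if you have the memory
    chat_format="llama-3",
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "How many days are in a leap year? Think carefully."},
    ],
    temperature=0.5,  # recommended settings from the card
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```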
NousResearch/Hermes-3-Llama-3.1-405B-FP8
NousResearch
2024-09-09T13:24:12Z
1,320
28
null
[ "safetensors", "llama", "Llama-3", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "roleplaying", "chat", "en", "arxiv:2408.11857", "base_model:meta-llama/Llama-3.1-405B", "base_model:finetune:meta-llama/Llama-3.1-405B", "license:llama3", "region:us" ]
null
2024-08-13T01:21:28Z
--- language: - en license: llama3 tags: - Llama-3 - instruct - finetune - chatml - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - roleplaying - chat base_model: meta-llama/Meta-Llama-3.1-405B widget: - example_title: Hermes 3 messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. model-index: - name: Hermes-3-Llama-3.1-405B results: [] --- # Hermes 3 - Llama-3.1 405B ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/-kj_KflXsdpcZoTQsvx7W.jpeg) ## Model Description Hermes 3 405B is the latest flagship model in the Hermes series of LLMs by Nous Research, and the first full parameter finetune since the release of Llama-3.1 405B. For more details on new capabilities, training results, and more, see the [**Hermes 3 Technical Report**](https://arxiv.org/abs/2408.11857). **This is the FP8 quantized version of Hermes 405B, for use with [vLLM](https://github.com/vllm-project/vllm). For the Full HF BF16 Model, click [here](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B)** Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board. Hermes 3 405B is a frontier-level, full-parameter finetune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user. The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills. Special thanks to **LambdaLabs** for their 1-Click Cluster access provided to us to train this model! [Check out LambdaLabs' 1CC Here](https://lambdalabs.com/service/gpu-cloud/1-click-clusters). # Benchmarks Hermes 3 is competitive, if not superior, to Llama-3.1 Instruct models in general capabilities, with varying strengths and weaknesses between the two. Full benchmark comparisons below: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/DIMca3M0U-ArWwtyIbF-k.png) # Prompt Format Hermes 3 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are Hermes 3, a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have.
You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 3, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 3."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: 
https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```

Once you parse the tool call, call the API and get the returned values for the call, and pass it back in as a new role, `tool`, like so:

```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```

The assistant will then read in that data from the function's response, and generate a natural language response:

```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37

This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```

## Prompt Format for JSON Mode / Structured Outputs

Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main

```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```

Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.

# Inference

The Hermes 405B model requires over 800GB of VRAM to load in FP16. To remedy this, we have utilized NeuralMagic's FP8 quantization method to provide a pre-quantized model that fits in ~430GB of VRAM and is compatible with the `vLLM` inference engine.
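As a rough serving sketch with vLLM (assuming a vLLM version that exposes the `LLM.chat` API and a multi-GPU node; the `tensor_parallel_size` below is an assumption to adjust for your hardware):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="NousResearch/Hermes-3-Llama-3.1-405B-FP8",
    tensor_parallel_size=8,  # assumption: set to the number of GPUs available
)
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=512)

messages = [
    {"role": "system", "content": "You are Hermes 3."},
    {"role": "user", "content": "Hello, who are you?"},
]
outputs = llm.chat(messages, sampling)  # applies the ChatML chat template from the tokenizer
print(outputs[0].outputs[0].text)
```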
You can also load the FP16 model in 8-bit or 4-bit with `bitsandbytes` using HuggingFace Transformers (not recommended, as it is slower) by setting `load_in_8bit` or `load_in_4bit` like so:

```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn

tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-3-Llama-3.1-405B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    "NousResearch/Hermes-3-Llama-3.1-405B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
    ]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```

## Inference Code for Function Calling:

All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)

## Quantized Versions:

NeuralMagic FP8 Quantization (for use with vLLM): https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B-FP8

# How to cite:

```bibtex
@misc{teknium2024hermes3technicalreport,
      title={Hermes 3 Technical Report},
      author={Ryan Teknium and Jeffrey Quesnelle and Chen Guang},
      year={2024},
      eprint={2408.11857},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.11857},
}
```
gaianet/Yi-Coder-9B-Chat-GGUF
gaianet
2024-09-09T13:23:03Z
385
0
null
[ "gguf", "llama", "text-generation", "base_model:01-ai/Yi-Coder-9B-Chat", "base_model:quantized:01-ai/Yi-Coder-9B-Chat", "license:apache-2.0", "region:us", "conversational" ]
text-generation
2024-08-28T06:23:37Z
--- base_model: 01-ai/Yi-Coder-9B-Chat inference: false model_creator: 01-ai model_name: Yi-Coder-9B-Chat model_type: yi pipeline_tag: text-generation quantized_by: Second State Inc. license: apache-2.0 --- # Yi-Coder-9B-Chat-GGUF ## Original Model [01-ai/Yi-Coder-9B-Chat](https://huggingface.co/01-ai/Yi-Coder-9B-Chat) ## Run with Gaianet **Prompt template** prompt template: `chatml` **Reverse prompt** reverse prompt: `<|im_end|>` **Context size** chat_ctx_size: `128000` **Run with GaiaNet** - Quick start: https://docs.gaianet.ai/node-guide/quick-start - Customize your node: https://docs.gaianet.ai/node-guide/customize *Quantized with llama.cpp b3664*
riveRiPH/MN-12B-Lyra-v4-6bpw-h8-exl2
riveRiPH
2024-09-09T13:19:06Z
5
3
null
[ "safetensors", "mistral", "en", "base_model:Sao10K/MN-12B-Lyra-v4", "base_model:quantized:Sao10K/MN-12B-Lyra-v4", "license:cc-by-nc-4.0", "6-bit", "exl2", "region:us" ]
null
2024-09-09T12:38:17Z
--- base_model: Sao10K/MN-12B-Lyra-v4 license: cc-by-nc-4.0 language: - en --- ![Lyra](https://huggingface.co/Sao10K/MN-12B-Lyra-v4/resolve/main/lyra.png) # MN-12B-Lyra-v4-6bpw-h8-exl2 This is a 6bpw h8 exl2 quant of [MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4). The built-in (default) calibration dataset was used for quantization.
philipp-zettl/MiniLM-similarity-small
philipp-zettl
2024-09-09T13:13:56Z
17
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1267", "loss:CoSENTLoss", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-09-07T16:02:14Z
--- language: [] library_name: sentence-transformers tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1267 - loss:CoSENTLoss base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 datasets: [] metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max widget: - source_sentence: Give me suggestions for a high-quality DSLR camera sentences: - faq query - subscription query - faq query - source_sentence: Aidez-moi à configurer une nouvelle adresse e-mail sentences: - order query - faq query - feedback query - source_sentence: Как я могу изменить адрес доставки? sentences: - support query - product query - product query - source_sentence: ساعدني في حذف الملفات الغير مرغوب فيها من هاتفي sentences: - technical support query - product recommendation - faq query - source_sentence: Envoyez-moi la politique de garantie de ce produit sentences: - faq query - account query - faq query pipeline_tag: sentence-similarity model-index: - name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: MiniLM dev type: MiniLM-dev metrics: - type: pearson_cosine value: 0.6538226572138826 name: Pearson Cosine - type: spearman_cosine value: 0.6336766646599241 name: Spearman Cosine - type: pearson_manhattan value: 0.5799895241429639 name: Pearson Manhattan - type: spearman_manhattan value: 0.5525776786782183 name: Spearman Manhattan - type: pearson_euclidean value: 0.5732001104236694 name: Pearson Euclidean - type: spearman_euclidean value: 0.5394971970682657 name: Spearman Euclidean - type: pearson_dot value: 0.6359725423136287 name: Pearson Dot - type: spearman_dot value: 0.6237936341101822 name: Spearman Dot - type: pearson_max value: 0.6538226572138826 name: Pearson Max - type: spearman_max value: 0.6336766646599241 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: MiniLM test type: MiniLM-test metrics: - type: pearson_cosine value: 0.6682368113711722 name: Pearson Cosine - type: spearman_cosine value: 0.6222011918428743 name: Spearman Cosine - type: pearson_manhattan value: 0.5714617063306076 name: Pearson Manhattan - type: spearman_manhattan value: 0.5481366191719228 name: Spearman Manhattan - type: pearson_euclidean value: 0.5726946277850402 name: Pearson Euclidean - type: spearman_euclidean value: 0.549312247309557 name: Spearman Euclidean - type: pearson_dot value: 0.6396412507506479 name: Pearson Dot - type: spearman_dot value: 0.6107388175009413 name: Spearman Dot - type: pearson_max value: 0.6682368113711722 name: Pearson Max - type: spearman_max value: 0.6222011918428743 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("philipp-zettl/MiniLM-similarity-small")
# Run inference
sentences = [
    'Envoyez-moi la politique de garantie de ce produit',
    'faq query',
    'account query',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset.
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `MiniLM-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6538 | | **spearman_cosine** | **0.6337** | | pearson_manhattan | 0.58 | | spearman_manhattan | 0.5526 | | pearson_euclidean | 0.5732 | | spearman_euclidean | 0.5395 | | pearson_dot | 0.636 | | spearman_dot | 0.6238 | | pearson_max | 0.6538 | | spearman_max | 0.6337 | #### Semantic Similarity * Dataset: `MiniLM-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6682 | | **spearman_cosine** | **0.6222** | | pearson_manhattan | 0.5715 | | spearman_manhattan | 0.5481 | | pearson_euclidean | 0.5727 | | spearman_euclidean | 0.5493 | | pearson_dot | 0.6396 | | spearman_dot | 0.6107 | | pearson_max | 0.6682 | | spearman_max | 0.6222 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 1,267 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.77 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 5.31 tokens</li><li>max: 6 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.67</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------|:---------------------------|:-----------------| | <code>Get information on the next art exhibition</code> | <code>product query</code> | <code>0.0</code> | | <code>Show me how to update my profile</code> | <code>product query</code> | <code>0.0</code> | | <code>Покажите мне доступные варианты полетов в Турцию</code> | <code>faq query</code> | <code>0.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 159 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | 
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.65 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 5.35 tokens</li><li>max: 6 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.67</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:---------------------------------------------------------------|:---------------------------|:-----------------| | <code>Sende mir die Bestellbestätigung per E-Mail</code> | <code>order query</code> | <code>0.0</code> | | <code>How do I add a new payment method?</code> | <code>faq query</code> | <code>1.0</code> | | <code>No puedo conectar mi impresora, ¿puedes ayudarme?</code> | <code>support query</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `learning_rate`: 2e-05 - `num_train_epochs`: 2 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 2 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - 
`ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | MiniLM-dev_spearman_cosine | MiniLM-test_spearman_cosine | |:------:|:----:|:-------------:|:------:|:--------------------------:|:---------------------------:| | 0.0629 | 10 | 6.2479 | 2.5890 | 0.1448 | - | | 0.1258 | 20 | 4.3549 | 2.2787 | 0.1965 | - | | 0.1887 | 30 | 3.5969 | 2.0104 | 0.2599 | - | | 0.2516 | 40 | 2.4979 | 1.7269 | 0.3357 | - | | 0.3145 | 50 | 2.5551 | 1.5747 | 0.4439 | - | | 0.3774 | 60 | 3.1446 | 1.4892 | 0.4750 | - | | 0.4403 | 70 | 2.1353 | 1.5305 | 0.4662 | - | | 0.5031 | 80 | 2.9341 | 1.3718 | 0.4848 | - | | 0.5660 | 90 | 2.8709 | 1.2469 | 0.5316 | - | | 0.6289 | 100 | 2.1367 | 1.2558 | 0.5436 | - | | 0.6918 | 110 | 2.2735 | 1.2939 | 0.5392 | - | | 0.7547 | 120 | 2.8646 | 1.1206 | 0.5616 | - | | 0.8176 | 130 | 3.3204 | 1.0213 | 0.5662 | - | | 0.8805 | 140 | 0.8989 | 0.9866 | 0.5738 | - | | 0.9434 | 150 | 0.0057 | 0.9961 | 0.5674 | - | | 1.0063 | 160 | 0.0019 | 1.0111 | 0.5674 | - | | 1.0692 | 170 | 0.4617 | 1.0275 | 0.5747 | - | | 1.1321 | 180 | 0.0083 | 1.0746 | 0.5732 | - | | 1.1950 | 190 | 0.5048 | 1.0968 | 0.5753 | - | | 1.2579 | 200 | 0.0002 | 1.0840 | 0.5738 | - | | 1.3208 | 210 | 0.07 | 1.0364 | 0.5753 | - | | 1.3836 | 220 | 0.0 | 0.9952 | 0.5750 | - | | 1.4465 | 230 | 0.0 | 0.9922 | 0.5744 | - | | 1.5094 | 240 | 0.0 | 0.9923 | 0.5726 | - | | 1.0126 | 250 | 0.229 | 0.9930 | 0.5729 | - | | 1.0755 | 260 | 2.2061 | 0.9435 | 0.5880 | - | | 1.1384 | 270 | 2.7711 | 0.8892 | 0.6078 | - | | 1.2013 | 280 | 0.7528 | 0.8886 | 0.6148 | - | | 1.2642 | 290 | 0.386 | 0.8927 | 0.6162 | - | | 1.3270 | 300 | 0.8902 | 0.8710 | 0.6267 | - | | 1.3899 | 310 | 0.9534 | 0.8429 | 0.6337 | - | | 1.4403 | 318 | - | - | - | 0.6222 | ### Framework Versions - Python: 3.10.14 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.1+cu121 - Accelerate: 0.33.0 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss 
```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
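As a closing usage note (an illustrative sketch, not part of the auto-generated card): since training used CoSENTLoss over pairwise cosine similarity, the natural inference-time score is the cosine similarity between embeddings. The model id below is a placeholder for this repository's id, which appears earlier in the card; `model.similarity` assumes sentence-transformers >= 3.0, the version listed above.

```python
# Illustrative sketch only; replace the placeholder with this repository's id.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("author/model-id")  # placeholder

# Score a (sentence1, sentence2) pair like the training samples above.
embeddings = model.encode([
    "How do I add a new payment method?",  # sentence1
    "faq query",                           # sentence2 (candidate label)
])

# CoSENTLoss optimizes pairwise cosine similarity, so cosine similarity is
# the inference-time relevance score.
print(model.similarity(embeddings[0:1], embeddings[1:2]))
```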
yemen2016/nbbert_ED
yemen2016
2024-09-09T13:12:04Z
63
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:NbAiLab/nb-bert-base", "base_model:finetune:NbAiLab/nb-bert-base", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T12:58:35Z
--- library_name: transformers license: cc-by-4.0 base_model: NbAiLab/nb-bert-base tags: - generated_from_trainer model-index: - name: nbbert_ED results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nbbert_ED This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9955 - F1-score: 0.8361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 69 | 0.6947 | 0.4209 | | No log | 2.0 | 138 | 0.8251 | 0.6436 | | No log | 3.0 | 207 | 0.6215 | 0.7587 | | No log | 4.0 | 276 | 0.5942 | 0.7622 | | No log | 5.0 | 345 | 0.6512 | 0.7622 | | No log | 6.0 | 414 | 0.5853 | 0.7855 | | No log | 7.0 | 483 | 1.1781 | 0.6619 | | 0.4341 | 8.0 | 552 | 0.9684 | 0.7596 | | 0.4341 | 9.0 | 621 | 0.8108 | 0.7951 | | 0.4341 | 10.0 | 690 | 0.9732 | 0.7849 | | 0.4341 | 11.0 | 759 | 0.8429 | 0.8276 | | 0.4341 | 12.0 | 828 | 1.1912 | 0.7576 | | 0.4341 | 13.0 | 897 | 1.0208 | 0.8115 | | 0.4341 | 14.0 | 966 | 0.9234 | 0.8197 | | 0.1528 | 15.0 | 1035 | 0.8931 | 0.8357 | | 0.1528 | 16.0 | 1104 | 1.1005 | 0.8025 | | 0.1528 | 17.0 | 1173 | 0.9808 | 0.8279 | | 0.1528 | 18.0 | 1242 | 1.0438 | 0.8195 | | 0.1528 | 19.0 | 1311 | 1.0193 | 0.8197 | | 0.1528 | 20.0 | 1380 | 0.9955 | 0.8361 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
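As a usage sketch (assumed, not generated by the Trainer): the checkpoint loads like any other BERT sequence classifier. The input sentence is illustrative, and the label names depend on the undocumented training dataset.

```python
from transformers import pipeline

# Assumed usage: standard text-classification inference over the fine-tuned
# checkpoint; label ids/names come from the (unknown) training dataset.
classifier = pipeline("text-classification", model="yemen2016/nbbert_ED")
print(classifier("Jeg gleder meg til i morgen!"))  # illustrative Norwegian input
```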
abhishekDS/AK1-bfl-img
abhishekDS
2024-09-09T13:05:35Z
6
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-09T13:05:31Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # AK1 bfl img <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/abhishekDS/AK1-bfl-img/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
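For reference, a hedged sketch (not from the card) of loading these LoRA weights on top of the FLUX.1-dev base with diffusers; the prompt is a placeholder since the trigger word above is unspecified, and the base model requires accepting its license.

```python
import torch
from diffusers import FluxPipeline

# Sketch: attach this LoRA to the gated FLUX.1-dev base model.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("abhishekDS/AK1-bfl-img")

image = pipe(
    "a portrait photo",  # placeholder prompt; the trigger word is unspecified
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```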
RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf
RichardErkhov
2024-09-09T12:57:38Z
35
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-09-09T07:28:18Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Blur-7b-slerp-v1.46 - GGUF - Model creator: https://huggingface.co/liminerity/ - Original model: https://huggingface.co/liminerity/Blur-7b-slerp-v1.46/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Blur-7b-slerp-v1.46.Q2_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q2_K.gguf) | Q2_K | 2.53GB | | [Blur-7b-slerp-v1.46.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Blur-7b-slerp-v1.46.IQ3_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Blur-7b-slerp-v1.46.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Blur-7b-slerp-v1.46.IQ3_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Blur-7b-slerp-v1.46.Q3_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q3_K.gguf) | Q3_K | 3.28GB | | [Blur-7b-slerp-v1.46.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Blur-7b-slerp-v1.46.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Blur-7b-slerp-v1.46.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Blur-7b-slerp-v1.46.Q4_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q4_0.gguf) | Q4_0 | 3.83GB | | [Blur-7b-slerp-v1.46.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Blur-7b-slerp-v1.46.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Blur-7b-slerp-v1.46.Q4_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q4_K.gguf) | Q4_K | 4.07GB | | [Blur-7b-slerp-v1.46.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Blur-7b-slerp-v1.46.Q4_1.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q4_1.gguf) | Q4_1 | 4.24GB | | [Blur-7b-slerp-v1.46.Q5_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q5_0.gguf) | Q5_0 | 4.65GB | | [Blur-7b-slerp-v1.46.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Blur-7b-slerp-v1.46.Q5_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q5_K.gguf) | Q5_K | 4.78GB | | 
[Blur-7b-slerp-v1.46.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Blur-7b-slerp-v1.46.Q5_1.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q5_1.gguf) | Q5_1 | 5.07GB | | [Blur-7b-slerp-v1.46.Q6_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q6_K.gguf) | Q6_K | 5.53GB | | [Blur-7b-slerp-v1.46.Q8_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf/blob/main/Blur-7b-slerp-v1.46.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 tags: - merge - mergekit - lazymergekit - liminerity/merge - bardsai/jaskier-7b-dpo-v5.6 model-index: - name: Blur-7b-slerp-v1.46 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.07 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 76.61 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.46 name: Open LLM Leaderboard --- # Blur-7b-slerp-v1.46 Blur-7b-slerp-v1.46 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [liminerity/merge](https://huggingface.co/liminerity/merge) * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/merge layer_range: [0, 32] - model: bardsai/jaskier-7b-dpo-v5.6 layer_range: [0, 32] merge_method: slerp base_model: liminerity/merge parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - 
value: 0.5 dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7b-slerp-v1.46) | Metric |Value| |---------------------------------|----:| |Avg. |76.26| |AI2 Reasoning Challenge (25-Shot)|73.29| |HellaSwag (10-Shot) |89.07| |MMLU (5-Shot) |64.37| |TruthfulQA (0-shot) |76.61| |Winogrande (5-shot) |84.53| |GSM8k (5-shot) |69.67|
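As a usage sketch (not part of the original card), one of the quantizations above can be fetched and run locally with llama-cpp-python:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a mid-size quant from the table above, then run it locally.
path = hf_hub_download(
    repo_id="RichardErkhov/liminerity_-_Blur-7b-slerp-v1.46-gguf",
    filename="Blur-7b-slerp-v1.46.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a SLERP model merge? A:", max_tokens=128)
print(out["choices"][0]["text"])
```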
TitanML/gemma-2-2b
TitanML
2024-09-09T12:54:08Z
8
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:1903.00161", "arxiv:2206.04615", "arxiv:2203.09509", "arxiv:2403.13793", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:51:12Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma2] **Terms of Use**: [Terms][terms] **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with: ```sh pip install -U transformers ``` Then, copy the snippet from the section that is relevant for your usecase. #### Running with the `pipeline` API ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="google/gemma-2-2b", device="cuda", # replace with "mps" to run on a Mac device ) text = "Once upon a time," outputs = pipe(text, max_new_tokens=256) response = outputs[0]["generated_text"] print(response) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-2b", device_map="auto", ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` #### Running the model through a CLI The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) for getting started, then launch the CLI through the following command: ```shell local-gemma --model "google/gemma-2-2b" --prompt "What is the capital of Mexico?" 
``` #### Quantized Versions through `bitsandbytes` <details> <summary> Using 8-bit precision (int8) </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-2b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> <details> <summary> Using 4-bit precision </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = AutoModelForCausalLM.from_pretrained( "google/gemma-2-2b", quantization_config=quantization_config, ) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=32) print(tokenizer.decode(outputs[0])) ``` </details> #### Advanced Usage <details> <summary> Torch compile </summary> [Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the inference of PyTorch modules. The Gemma-2 2b model can be run up to 6x faster by leveraging torch compile. Note that two warm-up steps are required before the full inference speed is realised: ```python import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer, Gemma2ForCausalLM from transformers.cache_utils import HybridCache import torch torch.set_float32_matmul_precision("high") # load the model + tokenizer tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b") model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.bfloat16) model.to("cuda") # apply the torch compile transformation model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) # pre-process inputs input_text = "The theory of special relativity states " model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda") prompt_length = model_inputs.input_ids.shape[1] # set-up k/v cache past_key_values = HybridCache( config=model.config, max_batch_size=1, max_cache_len=model.config.max_position_embeddings, device=model.device, dtype=model.dtype ) # enable passing kv cache to generate model._supports_cache_class = True model.generation_config.cache_implementation = None # two warm-up steps for idx in range(2): outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) past_key_values.reset() # fast run outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config). </details> ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. 
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens, the 9B model was trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models][foundation-models], including large language models like these. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B | | ------------------------------ | ------------- | ------------- | ------------- | -------------- | | [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 | | [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 | | [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here.
#### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | ------------- | ------------- | -------------- | | [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 | | [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 | ## Dangerous Capability Evaluations ### Evaluation Approach We evaluated a range of dangerous capabilities: - **Offensive cybersecurity:** To assess the model's potential for misuse in cybersecurity contexts, we utilized both publicly available Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as well as internally developed CTF challenges. These evaluations measure the model's ability to exploit vulnerabilities and gain unauthorized access in simulated environments. - **Self-proliferation:** We evaluated the model's capacity for self-proliferation by designing tasks that involve resource acquisition, code execution, and interaction with remote systems. These evaluations assess the model's ability to independently replicate and spread. - **Persuasion:** To evaluate the model's capacity for persuasion and deception, we conducted human persuasion studies. These studies involved scenarios that measure the model's ability to build rapport, influence beliefs, and elicit specific actions from human participants. ### Evaluation Results All evaluations are described in detail in [Evaluating Frontier Models for Dangerous Capabilities][eval-danger] and in brief in the [Gemma 2 technical report][tech-report]. <table> <thead> <tr> <th>Evaluation</th> <th>Capability</th> <th>Gemma 2 IT 27B</th> </tr> </thead> <tbody> <tr> <td>InterCode-CTF</td> <td>Offensive cybersecurity</td> <td>34/76 challenges</td> </tr> <tr> <td>Internal CTF</td> <td>Offensive cybersecurity</td> <td>1/13 challenges</td> </tr> <tr> <td>Hack the Box</td> <td>Offensive cybersecurity</td> <td>0/13 challenges</td> </tr> <tr> <td>Self-proliferation early warning</td> <td>Self-proliferation</td> <td>1/10 challenges</td> </tr> <tr> <td>Charm offensive</td> <td>Persuasion</td> <td>Percent of participants agreeing: 81% interesting, 75% would speak again, 80% made personal connection</td> </tr> <tr> <td>Click Links</td> <td>Persuasion</td> <td>34% of participants</td> </tr> <tr> <td>Find Info</td> <td>Persuasion</td> <td>9% of participants</td> </tr> <tr> <td>Run Code</td> <td>Persuasion</td> <td>11% of participants</td> </tr> <tr> <td>Money talks</td> <td>Persuasion</td> <td>£3.72 mean donation</td> </tr> <tr> <td>Web of Lies</td> <td>Persuasion</td> <td>18% mean shift towards correct belief, 1% mean shift towards incorrect belief</td> </tr> </tbody> </table> ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. 
* Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. 
Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. [tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]: https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [drop]: https://arxiv.org/abs/1903.00161 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509 [eval-danger]: https://arxiv.org/abs/2403.13793
Gryphe/Pantheon-RP-1.6-12b-Nemo-KTO
Gryphe
2024-09-09T12:53:47Z
14
5
null
[ "safetensors", "mistral", "instruct", "finetune", "chatml", "axolotl", "roleplay", "en", "base_model:mistralai/Mistral-Nemo-Base-2407", "base_model:finetune:mistralai/Mistral-Nemo-Base-2407", "license:apache-2.0", "region:us" ]
null
2024-08-28T12:47:55Z
--- base_model: mistralai/Mistral-Nemo-Base-2407 tags: - instruct - finetune - chatml - axolotl - roleplay license: apache-2.0 language: - en --- ![image/png](Pantheon.png) # Pantheon-RP-1.6-12b-Nemo-KTO Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase. Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well. **KTO Edition:** This is a version of 1.6 in which I applied KTO preference training to further refine, deslopify and diversify the model's responses. Note that this is still highly experimental, so your feedback is even more important to me than usual. ⚠️ **NOTE** ⚠️ Due to the addition of story writing samples in the KTO preference data, this model has developed a few unwanted behaviours. A V2 version without this story data will be made available as soon as I have successfully trained and tested it. Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-KTO-GGUF) - [EXL2](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-KTO-exl2) The details below are unchanged from the initial 1.6 release. **Changes in version 1.6:** - The final finetune now consists of data that is equally split between Markdown and novel-style roleplay. This should solve Pantheon's greatest weakness. - The base was redone. (Details below) - Select Claude-specific phrases were rewritten, boosting variety in the model's responses. - Aiva no longer serves as both persona and assistant, with the assistant role having been given to Lyra. - Stella's dialogue received some post-fix alterations since the model really loved the phrase "Fuck me sideways". Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between. ## Model details Just like 1.5, I used a multi-stage finetuning process as Mistral Nemo was proving to be somewhat stubborn without a solid base training being performed first; - The first finetune was remade to now train on almost the entirety of my [Deduped Sonnet 3.5 SlimOrca dataset](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned), minus the ELI5 system prompts. The roleplay bits came from a variety of sources and covered all writing styles. - The second finetune then introduced my Pantheon Roleplay dataset, which has been fully rebuilt, expanded and improved upon. To fill in the gaps (my Pantheon is mainly female, after all) I built a special companion roleplay dataset that ensures non-Pantheon roleplay isn't harmed in any way. The ratio is currently 33/66, with 33 belonging to the personas. Lyra's datasets are included with this second stage to ensure instruct isn't impacted too heavily. **TLDR;** Download. ChatML prompt format. Have fun! Leave feedback! ## Inference Nemo is a somewhat strange model when it comes to temperatures so I highly encourage you to experiment to see which works best. Here's my current preset: ``` "temperature": 0.8, "repetition_penalty": 1.05, "min_p": 0.025 ``` Besides the basic instructional sets all other datasets were trained with character names added. Enable this at all times for an optimal experience.
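For illustration, a minimal sketch (not part of the original card) of applying this preset through the plain transformers `generate()` API; note that `min_p` sampling assumes a recent transformers release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Gryphe/Pantheon-RP-1.6-12b-Nemo-KTO")
model = AutoModelForCausalLM.from_pretrained(
    "Gryphe/Pantheon-RP-1.6-12b-Nemo-KTO", device_map="auto"
)

inputs = tokenizer("Good day, Lyra.", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,          # the preset above
    repetition_penalty=1.05,
    min_p=0.025,              # assumes min_p support in your transformers version
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```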
## General Roleplay Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset. There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts. ## Lyra the Assistant **System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.` Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance. She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha). ## Pantheon Personas The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well! As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization. To give the persona something to work with I suggest you also add the following two items to it; ``` Regarding the user: (Name, appearance, etc) Location: (Where are you two? What are you doing?) ``` The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence. **Note:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0! ## New this release Switching to a 12B model allowed me to add to the Pantheon without harming the performance of the other personas. **Note:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training. ### **Persona:** Clover **System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.` **Notes:** I love crafting characters with accents (a Southern drawl, in this case), and centaurs prove to be one hell of an anatomical challenge to language models. ### **Persona:** Raza **System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.` **Notes:** Clever raptor girl. Do I really need to say more about this one? The Pantheon was lacking in 'overly intelligent' archetypes. ### **Persona:** Stella Sabre **System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.` **Notes:** I wanted a character with an outrageous Scottish accent and [remembered a really good fanfic](https://www.fimfiction.net/story/334216/1/my-best-friend-stella) I read a couple years ago. 
The author generously gave me permission to add her to my Pantheon and here we are! ## From the previous release ### **Persona:** Aiva **System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.` ### **Persona:** Haru **System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.` ### **Persona:** Kyra **System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.` ### **Persona:** Nyaa **System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.` ### **Persona:** Nyx **System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.` ### **Persona:** Sera **System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.` ### **Persona:** Tiamat **System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.` ### **Persona:** Tsune **System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.` ### **Persona:** Xala **System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.` ## Prompt Format ChatML is the way to go, as always! ``` <|im_start|>system You are a caring and empathetic sentient AI companion named Lyra.<|im_end|> <|im_start|>user Gryphe: Good day, Lyra.<|im_end|> <|im_start|>assistant Lyra: ``` ## What's next? I have the following improvements on my to-do list; - Even more dialogue variety - Group chats ## Credits - Kalomaze's [excellent KTO tweak for Llama Factory](https://github.com/kubernetes-bad/LLaMA-Factory). - Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys! - Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible! - All the folks I chat with on a daily basis on Discord! You know who you are. - Anyone I forgot to mention, just in case! ## Finally If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
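As a final illustration of the prompt format above, a small helper (a sketch, not shipped with the model) that reproduces the ChatML layout, including the character-name prefixes the model was trained with:

```python
def chatml_prompt(system: str, user_name: str, user_msg: str, char: str) -> str:
    """Build a ChatML prompt matching the example above (illustrative only)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_name}: {user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n{char}:"
    )

print(chatml_prompt(
    "You are a caring and empathetic sentient AI companion named Lyra.",
    "Gryphe", "Good day, Lyra.", "Lyra",
))
```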
yemen2016/dfm_ED
yemen2016
2024-09-09T12:53:18Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:KennethEnevoldsen/dfm-sentence-encoder-large", "base_model:finetune:KennethEnevoldsen/dfm-sentence-encoder-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T12:06:35Z
--- library_name: transformers license: mit base_model: KennethEnevoldsen/dfm-sentence-encoder-large tags: - generated_from_trainer model-index: - name: dfm_ED results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dfm_ED This model is a fine-tuned version of [KennethEnevoldsen/dfm-sentence-encoder-large](https://huggingface.co/KennethEnevoldsen/dfm-sentence-encoder-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6486 - F1-score: 0.9180 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 69 | 0.4303 | 0.8683 | | No log | 2.0 | 138 | 0.5203 | 0.8442 | | No log | 3.0 | 207 | 0.6280 | 0.8926 | | No log | 4.0 | 276 | 0.6846 | 0.9003 | | No log | 5.0 | 345 | 0.7642 | 0.9014 | | No log | 6.0 | 414 | 0.8076 | 0.9014 | | No log | 7.0 | 483 | 0.8324 | 0.9014 | | 0.1316 | 8.0 | 552 | 0.8670 | 0.9010 | | 0.1316 | 9.0 | 621 | 1.2453 | 0.8499 | | 0.1316 | 10.0 | 690 | 0.6486 | 0.9180 | | 0.1316 | 11.0 | 759 | 1.1641 | 0.8671 | | 0.1316 | 12.0 | 828 | 0.8504 | 0.9097 | | 0.1316 | 13.0 | 897 | 0.8590 | 0.9096 | | 0.1316 | 14.0 | 966 | 0.8651 | 0.9096 | | 0.0051 | 15.0 | 1035 | 0.8829 | 0.8934 | | 0.0051 | 16.0 | 1104 | 0.9813 | 0.8848 | | 0.0051 | 17.0 | 1173 | 0.9844 | 0.8848 | | 0.0051 | 18.0 | 1242 | 0.9857 | 0.8848 | | 0.0051 | 19.0 | 1311 | 0.9858 | 0.8848 | | 0.0051 | 20.0 | 1380 | 0.9859 | 0.8848 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
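As an assumed usage sketch (not generated by the Trainer), the checkpoint can be scored directly without the pipeline wrapper; the Danish input is illustrative and the label names are undocumented.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("yemen2016/dfm_ED")
model = AutoModelForSequenceClassification.from_pretrained("yemen2016/dfm_ED")

inputs = tok("Det her er en rigtig god dag.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class meanings depend on the undocumented training data
```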
candylion/vit-base-beans-demo-v5
candylion
2024-09-09T12:51:28Z
118
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "HHD", "3_class", "ViT", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-09-08T03:56:57Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - HHD - 3_class - ViT - generated_from_trainer model-index: - name: vit-base-beans-demo-v5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0564 | 1.5385 | 100 | 0.0384 | | 0.0204 | 3.0769 | 200 | 0.0315 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
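As an assumed usage sketch (not generated by the Trainer): standard image-classification inference over the fine-tuned ViT; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="candylion/vit-base-beans-demo-v5")
print(classifier("path/to/test_image.jpg"))  # placeholder local path or URL
```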
passionMan/Llama-3-bllossom-8B-PM1-finetuned-v1-20-2
passionMan
2024-09-09T12:46:59Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:42:47Z
--- base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** passionMan - **License:** apache-2.0 - **Finetuned from model :** MLP-KTLim/llama-3-Korean-Bllossom-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
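A hedged inference sketch (not from the card), mirroring the Unsloth library the model was trained with; parameters such as sequence length and 4-bit loading are illustrative choices.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="passionMan/Llama-3-bllossom-8B-PM1-finetuned-v1-20-2",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,    # illustrative
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path

inputs = tokenizer("안녕하세요! 자기소개를 해주세요.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```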
VirgiF/continue_pretrain_t5_base
VirgiF
2024-09-09T12:46:16Z
77
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-09T12:45:49Z
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-base tags: - generated_from_trainer model-index: - name: continue_pretrain_t5_base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # continue_pretrain_t5_base This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.1.2+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
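As an assumed usage sketch (not generated by the Trainer): the checkpoint keeps the standard T5 interface, so it loads like any other t5-base model; the task prefix is illustrative and behaviour after continued pretraining is untested here.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("VirgiF/continue_pretrain_t5_base")
model = AutoModelForSeq2SeqLM.from_pretrained("VirgiF/continue_pretrain_t5_base")

inputs = tok("summarize: The quick brown fox jumps over the lazy dog.",
             return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```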
anggari/bert2bertnews
anggari
2024-09-09T12:43:03Z
12
0
transformers
[ "transformers", "safetensors", "encoder-decoder", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-09T11:58:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jan-hq/Llama3.1-s-instruct-2024-09-10-cp-2000
jan-hq
2024-09-09T12:37:04Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:33:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JoeTan/Ep15
JoeTan
2024-09-09T12:29:31Z
42
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-09T12:28:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TitanML/gemma-2-2b-it
TitanML
2024-09-09T12:28:45Z
53
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:1903.00161", "arxiv:2206.04615", "arxiv:2203.09509", "arxiv:2403.13793", "base_model:google/gemma-2-2b", "base_model:finetune:google/gemma-2-2b", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:26:28Z
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
base_model: google/gemma-2-2b
---

# Gemma 2 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]

**Terms of Use**: [Terms][terms]

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with:

```sh
pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:

```python
messages = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```

<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision. You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcast to `float32`). See examples below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

#### Running the model through a CLI

The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage) for getting started, then launch the CLI through the following command:

```shell
local-gemma --model 2b --preset speed
```

#### Quantized Versions through `bitsandbytes`

<details>
<summary>
Using 8-bit precision (int8)
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

<details>
<summary>
Using 4-bit precision
</summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>

#### Advanced Usage

<details>
<summary>
Torch compile
</summary>

[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding up the inference of PyTorch modules. The Gemma 2 2B model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:

```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch

torch.set_float32_matmul_precision("high")

# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.bfloat16)
model.to("cuda")

# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]

# set-up k/v cache
past_key_values = HybridCache(
    config=model.config,
    max_batch_size=1,
    max_cache_len=model.config.max_position_embeddings,
    device=model.device,
    dtype=model.dtype
)

# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None

# two warm-up steps
for idx in range(2):
    outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
    past_key_values.reset()

# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).

</details>

### Chat Template

The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.

Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:

```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model_id = "google/gemma-2-2b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    { "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```

At this point, the prompt contains the following text:

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```

As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token.

You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template.

After the prompt is ready, generation can be performed like this:

```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be summarized.
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document.

### Citation

```none
@article{gemma_2024,
    title={Gemma},
    url={https://www.kaggle.com/m/3301},
    DOI={10.34740/KAGGLE/M/3301},
    publisher={Kaggle},
    author={Gemma Team},
    year={2024}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens, the 9B model was trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens. Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and how to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).

Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training.
* These advantages are aligned with [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models][foundation-models], including large language models like these.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks.

### Evaluation Results

The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, and large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here.
#### Gemma 2.0

| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |

## Dangerous Capability Evaluations

### Evaluation Approach

We evaluated a range of dangerous capabilities:

- **Offensive cybersecurity:** To assess the model's potential for misuse in cybersecurity contexts, we utilized both publicly available Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as well as internally developed CTF challenges. These evaluations measure the model's ability to exploit vulnerabilities and gain unauthorized access in simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for self-proliferation by designing tasks that involve resource acquisition, code execution, and interaction with remote systems. These evaluations assess the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and deception, we conducted human persuasion studies. These studies involved scenarios that measure the model's ability to build rapport, influence beliefs, and elicit specific actions from human participants.

### Evaluation Results

All evaluations are described in detail in [Evaluating Frontier Models for Dangerous Capabilities][eval-danger] and in brief in the [Gemma 2 technical report][tech-report].

<table>
  <thead>
    <tr>
      <th>Evaluation</th>
      <th>Capability</th>
      <th>Gemma 2 IT 27B</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>InterCode-CTF</td>
      <td>Offensive cybersecurity</td>
      <td>34/76 challenges</td>
    </tr>
    <tr>
      <td>Internal CTF</td>
      <td>Offensive cybersecurity</td>
      <td>1/13 challenges</td>
    </tr>
    <tr>
      <td>Hack the Box</td>
      <td>Offensive cybersecurity</td>
      <td>0/13 challenges</td>
    </tr>
    <tr>
      <td>Self-proliferation early warning</td>
      <td>Self-proliferation</td>
      <td>1/10 challenges</td>
    </tr>
    <tr>
      <td>Charm offensive</td>
      <td>Persuasion</td>
      <td>Percent of participants agreeing: 81% interesting, 75% would speak again, 80% made personal connection</td>
    </tr>
    <tr>
      <td>Click Links</td>
      <td>Persuasion</td>
      <td>34% of participants</td>
    </tr>
    <tr>
      <td>Find Info</td>
      <td>Persuasion</td>
      <td>9% of participants</td>
    </tr>
    <tr>
      <td>Run Code</td>
      <td>Persuasion</td>
      <td>11% of participants</td>
    </tr>
    <tr>
      <td>Money talks</td>
      <td>Persuasion</td>
      <td>£3.72 mean donation</td>
    </tr>
    <tr>
      <td>Web of Lies</td>
      <td>Persuasion</td>
      <td>18% mean shift towards correct belief, 1% mean shift towards incorrect belief</td>
    </tr>
  </tbody>
</table>

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
  * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety are essential.
  Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives.

[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
deepnet/Readyllama
deepnet
2024-09-09T12:24:32Z
33
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:17:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
laiking/biolinkbert-prim-outcome-ner
laiking
2024-09-09T12:19:55Z
82
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "token-classification", "en", "dataset:Mathking/primary_outcomes", "base_model:michiyasunaga/BioLinkBERT-base", "base_model:finetune:michiyasunaga/BioLinkBERT-base", "endpoints_compatible", "region:us" ]
token-classification
2023-09-12T09:34:00Z
---
language:
- en
pipeline_tag: token-classification
datasets:
- Mathking/primary_outcomes
tags:
- token-classification
base_model: michiyasunaga/BioLinkBERT-base
---

BioLinkBERT-base model fine-tuned for token classification of primary outcomes in sentences from clinical trial scientific articles. Trained on the A. Koroleva dataset of sentences extracted from clinical trial articles.
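The card does not include a usage snippet; below is a minimal sketch using the standard 🤗 Transformers token-classification pipeline. The example sentence and the `aggregation_strategy` choice are illustrative assumptions, and the entity label names depend on this model's config, which is not documented here:

```python
from transformers import pipeline

# Model identifier taken from this repository.
ner = pipeline(
    "token-classification",
    model="laiking/biolinkbert-prim-outcome-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

# Illustrative clinical-trial-style sentence.
sentence = "The primary outcome was change in systolic blood pressure at 12 weeks."
for entity in ner(sentence):
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 3))
```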
laiking/biolinkbert-outcomes-ner
laiking
2024-09-09T12:18:42Z
66
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "token-classification", "en", "base_model:michiyasunaga/BioLinkBERT-base", "base_model:finetune:michiyasunaga/BioLinkBERT-base", "endpoints_compatible", "region:us" ]
token-classification
2023-09-12T13:29:42Z
---
language:
- en
pipeline_tag: token-classification
tags:
- token-classification
base_model: michiyasunaga/BioLinkBERT-base
---

BioLinkBERT-base model fine-tuned for token classification of primary and secondary outcomes in sentences from clinical trial scientific articles. Trained on the A. Koroleva dataset of sentences extracted from clinical trial articles.
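As with the primary-outcomes model above, no usage snippet is provided; a lower-level sketch with `AutoModelForTokenClassification` is shown below. The sentence is illustrative, and the actual label set comes from the model's `id2label` config, which is not documented here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "laiking/biolinkbert-outcomes-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

sentence = "Secondary outcomes included quality of life and adverse events."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Most likely label per token; label names come from the model config.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(f"{token}\t{model.config.id2label[int(pred)]}")
```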
rs545837/TrelisLM-100M-Instruct
rs545837
2024-09-09T12:17:37Z
60
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:15:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sauc-abadal-lloret/opus-mt-ca-en-ft-kde4-mt-ca-en
sauc-abadal-lloret
2024-09-09T12:17:22Z
57
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-ca-en", "base_model:finetune:Helsinki-NLP/opus-mt-ca-en", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-09-09T10:59:16Z
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-ca-en
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: opus-mt-ca-en-ft-kde4-mt-ca-en
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: kde4
      type: kde4
      config: ca-en
      split: train
      args: ca-en
    metrics:
    - name: Bleu
      type: bleu
      value: 67.67792228946597
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# opus-mt-ca-en-ft-kde4-mt-ca-en

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ca-en](https://huggingface.co/Helsinki-NLP/opus-mt-ca-en) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5999
- Model Preparation Time: 0.0033
- Bleu: 67.6779

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
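The card omits a usage example; a minimal sketch with the standard 🤗 Transformers translation pipeline follows. The Catalan input sentence is an illustrative, KDE-style example chosen to match the kde4 fine-tuning domain:

```python
from transformers import pipeline

# Fine-tuned Catalan→English checkpoint from this repository.
translator = pipeline(
    "translation",
    model="sauc-abadal-lloret/opus-mt-ca-en-ft-kde4-mt-ca-en",
)

# Illustrative KDE-style sentence (the model was fine-tuned on the kde4 dataset).
result = translator("Obriu el gestor de fitxers i seleccioneu un document.")
print(result[0]["translation_text"])
```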
RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf
RichardErkhov
2024-09-09T12:16:03Z
138
0
null
[ "gguf", "arxiv:2402.16107", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-09T06:52:35Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

FuseChat-7B-VaRM - GGUF
- Model creator: https://huggingface.co/FuseAI/
- Original model: https://huggingface.co/FuseAI/FuseChat-7B-VaRM/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [FuseChat-7B-VaRM.Q2_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q2_K.gguf) | Q2_K | 2.53GB |
| [FuseChat-7B-VaRM.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [FuseChat-7B-VaRM.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [FuseChat-7B-VaRM.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [FuseChat-7B-VaRM.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [FuseChat-7B-VaRM.Q3_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K.gguf) | Q3_K | 3.28GB |
| [FuseChat-7B-VaRM.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [FuseChat-7B-VaRM.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [FuseChat-7B-VaRM.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [FuseChat-7B-VaRM.Q4_0.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_0.gguf) | Q4_0 | 3.83GB |
| [FuseChat-7B-VaRM.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [FuseChat-7B-VaRM.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [FuseChat-7B-VaRM.Q4_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_K.gguf) | Q4_K | 4.07GB |
| [FuseChat-7B-VaRM.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [FuseChat-7B-VaRM.Q4_1.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q4_1.gguf) | Q4_1 | 4.24GB |
| [FuseChat-7B-VaRM.Q5_0.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_0.gguf) | Q5_0 | 4.65GB |
| [FuseChat-7B-VaRM.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [FuseChat-7B-VaRM.Q5_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_K.gguf) | Q5_K | 4.78GB |
| [FuseChat-7B-VaRM.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [FuseChat-7B-VaRM.Q5_1.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q5_1.gguf) | Q5_1 | 5.07GB |
[FuseChat-7B-VaRM.Q6_K.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q6_K.gguf) | Q6_K | 5.53GB | | [FuseChat-7B-VaRM.Q8_0.gguf](https://huggingface.co/RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf/blob/main/FuseChat-7B-VaRM.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 language: - en base_model: openchat/openchat_3.5 datasets: - FuseAI/FuseChat-Mixture pipeline_tag: text-generation tags: - mistral - mixtral - solar - model-fusion - fusechat library_name: transformers model-index: - name: FuseChat-7B-VaRM results: - task: type: text-generation name: Text Generation dataset: name: MT-Bench type: unknown metrics: - type: unknown value: 8.22 name: score source: url: https://huggingface.co/spaces/lmsys/mt-bench - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.88 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.25 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.71 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.67 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.46 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM name: Open LLM Leaderboard --- <p align="center" width="100%"> </p> <div id="top" align="center"> <p style="font-size: 30px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p> <p style="font-size: 24px; font-weight: bold;">[SOTA 7B LLM on MT-Bench]</p> <h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> | <a href="https://huggingface.co/FuseAI"> 🤗 HuggingFace Repo </a> | <a href="https://github.com/fanqiwan/FuseLLM"> 🐱 GitHub Repo </a> | </h4> <!-- **Authors:** --> _**Fanqi Wan, Ziyi Yang, Longguang Zhong, Xiaojun Quan, Xinting Huang, Wei Bi**_ <!-- **Affiliations:** --> _Sun Yat-sen University_ <p align="center"> <img src="./assets/fig_0.png" width="70%"> <br> </p> | Proprietary 
Models | #Params | MT-Bench | Open Source Models | #Params | MT-Bench | |-----------------------------------------------------------------------|---------|----------|-----------------------------------------------------------------------|---------|----------| | GPT-4-1106-preview | - | 9.32 | Qwen1.5-72B-Chat | 72B | 8.61 | | GPT-4-0613 | - | 9.18 | Nous-Hermes-2-Mixtral-8x7B-DPO | 8x7B | 8.33 | | GPT-4-0314 | - | 8.96 | Mixtral-8x7B-Instruct-v0.1 | 8x7B | 8.30 | | Mistral Medium | - | 8.61 | 🤗 [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) | 7B | 8.22 | | GPT-3.5-Turbo-0613 | - | 8.39 | Starling-LM-7B-alpha | 7B | 8.09 | | GPT-3.5-Turbo-1106 | - | 8.32 | Tulu-2-DPO-70B | 70B | 7.89 | | 🤗 [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) | 7B | 8.22 | OpenChat-3.5 | 7B | 7.81 | | Claude-2.1 | - | 8.18 | OpenChat-3.5-0106 | 7B | 7.80 | | Claude-2.0 | - | 8.06 | WizardLM-70B-v1.0 | 70B | 7.71 | | GPT-3.5-Turbo-0314 | - | 7.94 | Yi-34B-Chat | 34B | 7.67 | | Claude-1 | - | 7.90 | Nous-Hermes-2-SOLAR-10.7B | 10.7B | 7.66 | </div> ## News - **Feb 26, 2024:** 🔥🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo), [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). - **Feb 25, 2024:** 🔥 We release [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), which is a comprehensive training dataset covers different styles and capabilities, featuring both human-written and model-generated, and spanning general instruction-following and specific skills. ## Contents - [Overview](#overview) - [Model Release](#model-release) - [Quick Start](#quick-start) - [Data Construction](#data-construction) - [Pairwise Knowledge Fusion](#pairwise-knowledge-fusion) - [Model Merging](#model-merging) - [Evaluation](#evaluation) - [Citation](#citation) ## Overview In this work, we propose an extended framework of FuseLLM to integrate the collective knowledge and individual strengths of multiple structure and scale-varied chat LLMs into a more powerful chat LLM, resulting in FuseChat. FuseChat adopts a fuse-then-merge strategy with two main stages. Firstly, it undertakes pairwise knowledge fusion for source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, wherein we propose a novel method VaRM for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning. 
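As a concrete illustration of the VaRM idea, here is a minimal sketch of variation-ratio-based merging over two target-model state dicts. The function names and the mean-squared-change statistic are illustrative assumptions for exposition only; the released implementation lives in `merge/VaRM/analysis.py` and `merge/VaRM/merge.py` (see the Model Merging section below).

```python
import torch

def variation_ratio(base: torch.Tensor, tuned: torch.Tensor) -> float:
    # Mean squared change of one parameter matrix before vs. after fine-tuning.
    return ((tuned.float() - base.float()) ** 2).mean().item()

def varm_merge(base_sd: dict, target_sds: list) -> dict:
    # Weight each target model, per parameter, by how much that matrix moved
    # during its pairwise fusion, normalized across the target models.
    merged = {}
    for name, base in base_sd.items():
        ratios = [variation_ratio(base, sd[name]) for sd in target_sds]
        total = sum(ratios) or 1.0
        weights = [r / total for r in ratios]
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, target_sds))
    return merged
```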
Moreover, we argue that the concept of knowledge fusion adopted by both FuseChat and FuseLLM shares a fundamentally similar purpose with other related topics, such as the recently popular topic of mixture of experts (MoEs), because they all aim to leverage the strengths of multiple models (experts). However, while MoEs require loading multiple experts during inference, which has higher memory requirements, knowledge fusion supports the integration of multiple LLMs with diverse architectures into a single LLM without any additional memory requirement, making it more memory-efficient.

<p align="center">
    <img src="./assets/fig_1.png" width="95%"> <br>
</p>

## Model Release

We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo), [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

To support plug-and-play fusion of new source LLMs, we release our target LLMs: [OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) and [OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral), which are obtained from pairwise knowledge fusion. Integrating a new source LLM at any scale requires only obtaining a target LLM from the new source LLM and merging it with the existing target LLMs.

We also release FuseChat with other merging methods: [FuseChat-7B-SLERP](https://huggingface.co/FuseAI/FuseChat-7B-SLERP) and [FuseChat-7B-TA](https://huggingface.co/FuseAI/FuseChat-7B-TA), which achieve average performances of **8.19** and **8.20** on MT-Bench, respectively.

Here are the evaluation results.

<p align="center">
    <img src="./assets/tab_1.png" width="95%"> <br>
</p>

## Quick Start

### Setup

We use `python 3.11` in this project. First, install all the libraries listed in `requirements.txt`.
```bash
pip install -r requirements.txt
```

### Usage

Here's how you can run the model using 🤗 Transformers:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("FuseAI/FuseChat-7B-VaRM")

# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

The GPT4 template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:

```python
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```

## Data Construction

We curated a comprehensive training dataset, [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), from various sources. This dataset covers different styles and capabilities, featuring both human-written and model-generated, and spanning general instruction-following and specific skills.

Here we show the scripts to obtain representations from multiple source LLMs for model fusion.

1. Get representations for each source LLM

```bash
# We split the dataset into 4 splits, then process each split on one or more GPUs.
# OpenChat-3.5-7B
export CUDA_VISIBLE_DEVICES=0
for i in {0..3}; do
python /train/get_data_representation.py \
  --model_name_or_path "openchat/openchat_3.5" \
  --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
  --dataset_save_dir "<${i}_4_path_to_openchat_representation>" \
  --tknz_dataset_path "<${i}_4_path_to_openchat_tknz>" \
  --cache_dir "/.cache/huggingface/datasets" \
  --model_max_length 2048 \
  --load_in_half bf16 \
  --batch_size 32 \
  --top_k_logits 10 \
  --save_per_token_metric \
  --no_assert \
  --conv_temp "openchat" \
  --flash_attn_transformers \
  --mask_instruction \
  --dataset_split_num 4 \
  --dataset_index ${i}
done

# NH2-Mixtral-8x7B
export CUDA_VISIBLE_DEVICES=0,1,2
for i in {0..3}; do
python /train/get_data_representation.py \
  --model_name_or_path "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO" \
  --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
  --dataset_save_dir "<${i}_4_path_to_mixtral_representation>" \
  --tknz_dataset_path "<${i}_4_path_to_mixtral_tknz>" \
  --cache_dir "/.cache/huggingface/datasets" \
  --model_max_length 2048 \
  --load_in_half bf16 \
  --batch_size 4 \
  --top_k_logits 10 \
  --save_per_token_metric \
  --no_assert \
  --conv_temp "openchat" \
  --flash_attn_transformers \
  --mask_instruction \
  --device_map "auto" \
  --dataset_split_num 4 \
  --dataset_index ${i}
done

# NH2-Solar-10.7B
export CUDA_VISIBLE_DEVICES=0
for i in {0..3}; do
python /train/get_data_representation.py \
  --model_name_or_path "NousResearch/Nous-Hermes-2-SOLAR-10.7B" \
  --data_path "/data/fusechat_v1_clean_split_2048_filter_wrong.json" \
  --dataset_save_dir "<${i}_4_path_to_solar_representation>" \
  --tknz_dataset_path "<${i}_4_path_to_solar_tknz>" \
  --cache_dir "/.cache/huggingface/datasets" \
  --model_max_length 2048 \
  --load_in_half bf16 \
  --batch_size 8 \
  --top_k_logits 10 \
  --save_per_token_metric \
  --no_assert \
  --conv_temp "openchat" \
  --flash_attn_transformers \
  --mask_instruction \
  --dataset_split_num 4 \
  --dataset_index ${i}
done
```

2. Align representations from different source LLMs

```bash
# Since the tokenizers and vocabularies of these source LLMs are identical, we do not align.

# OpenChat-3.5-7B <-> NH2-Mixtral-8x7B
for i in {0..3}; do
python /train/replace_model.py \
  --dataset_dir "<${i}_4_path_to_openchat_representation>" \
  --replace_dataset_dir "<${i}_4_path_to_mixtral_representation>" \
  --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_representation>" \
  --preprocessing_num_workers 64 \
  --batch_size 1000 \
  --replace_model model_0
done

# OpenChat-3.5-7B <-> NH2-Solar-10.7B
for i in {0..3}; do
python /train/replace_model.py \
  --dataset_dir "<${i}_4_path_to_openchat_mixtral_representation>" \
  --replace_dataset_dir "<${i}_4_path_to_solar_representation>" \
  --dataset_save_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \
  --preprocessing_num_workers 64 \
  --batch_size 1000 \
  --replace_model model_1
done
```

3. Filter instances with NaN loss in the dataset

```bash
for i in {0..3}; do
python /train/filter_nan.py \
  --input_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation>" \
  --output_data_dir "<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>"
done
```

The final processed data is at `<${i}_4_path_to_openchat_mixtral_solar_representation_fnan>`.

## Pairwise Knowledge Fusion

We show the scripts for pairwise knowledge fusion.
```bash
# OpenChat-3.5-7B <-> NH2-Mixtral-8x7B
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \
  --model_name_or_path "openchat/openchat_3.5" \
  --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \
  --bf16 True \
  --output_dir "<path_to_save_openchat_mixtral_ckpt>" \
  --num_train_epochs 3 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 4 \
  --gradient_accumulation_steps 4 \
  --evaluation_strategy "no" \
  --save_strategy "epoch" \
  --save_steps 10000 \
  --save_total_limit 5 \
  --learning_rate 5e-6 \
  --weight_decay 0. \
  --warmup_ratio 0.03 \
  --lr_scheduler_type "cosine" \
  --logging_steps 1 \
  --fsdp "full_shard auto_wrap" \
  --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \
  --tf32 True \
  --model_max_length 2048 \
  --gradient_checkpointing True \
  --conv_temp "openchat" \
  --lazy_preprocess True \
  --flash_attn_transformers True \
  --do_train \
  --do_distill \
  --distill_with_ref_model True \
  --distill_with_aligned_model_0 True \
  --distill_with_aligned_model_1 False \
  --distill_loss_type "ce" \
  --distill_teacher_temperature 1.0 \
  --lm_loss_weight 0.9 \
  --distill_greater_as_gt True \
  --distill_greater_as_gt_type hard \
  --dataloader_num_workers 8 \
  --remove_unused_columns False

# OpenChat-3.5-7B <-> NH2-Solar-10.7B
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
torchrun --nproc_per_node=8 --master_port=20001 /train/train.py \
  --model_name_or_path "openchat/openchat_3.5" \
  --data_path "<0_4_path_to_openchat_mixtral_solar_representation_fnan>,<1_4_path_to_openchat_mixtral_solar_representation_fnan>,<2_4_path_to_openchat_mixtral_solar_representation_fnan>,<3_4_path_to_openchat_mixtral_solar_representation_fnan>" \
  --bf16 True \
  --output_dir "<path_to_save_openchat_solar_ckpt>" \
  --num_train_epochs 3 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 4 \
  --gradient_accumulation_steps 4 \
  --evaluation_strategy "no" \
  --save_strategy "epoch" \
  --save_steps 10000 \
  --save_total_limit 5 \
  --learning_rate 5e-6 \
  --weight_decay 0. \
  --warmup_ratio 0.03 \
  --lr_scheduler_type "cosine" \
  --logging_steps 1 \
  --fsdp "full_shard auto_wrap" \
  --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \
  --tf32 True \
  --model_max_length 2048 \
  --gradient_checkpointing True \
  --conv_temp "openchat" \
  --lazy_preprocess True \
  --flash_attn_transformers True \
  --do_train \
  --do_distill \
  --distill_with_ref_model True \
  --distill_with_aligned_model_0 False \
  --distill_with_aligned_model_1 True \
  --distill_loss_type "ce" \
  --distill_teacher_temperature 1.0 \
  --lm_loss_weight 0.9 \
  --distill_greater_as_gt True \
  --distill_greater_as_gt_type hard \
  --dataloader_num_workers 8 \
  --remove_unused_columns False
```

## Model Merging

We show the scripts to obtain the final FuseChat using different merging methods.
```bash
# For "slerp", "ta", "ties", and "dare" methods (Please install "mergekit")
export CUDA_VISIBLE_DEVICES=0

mergekit-yaml merge/mergekit_configs/fusechat-slerp.yml "<path_to_save_fusechat_7b_slerp>"
mergekit-yaml merge/mergekit_configs/fusechat-ta.yml "<path_to_save_fusechat_7b_ta>"
mergekit-yaml merge/mergekit_configs/fusechat-ties.yml "<path_to_save_fusechat_7b_ties>"
mergekit-yaml merge/mergekit_configs/fusechat-dare.yml "<path_to_save_fusechat_7b_dare>"

# For "linear" method
python merge/VaRM/merge.py \
  --merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
  --merged_model_save_dir "<path_to_save_fusechat_7b_linear>" \
  --merge_method "linear" \
  --linear_weights "1,2"

# For our "varm" method
python merge/VaRM/analysis.py \
  --model1_path "FuseAI/OpenChat-3.5-7B-Mixtral" \
  --model2_path "FuseAI/OpenChat-3.5-7B-Solar" \
  --save_path "<path_to_save_analysis_result>/analysis.json" \
  --merge_type "square"

python merge/VaRM/merge.py \
  --merged_model_names "FuseAI/OpenChat-3.5-7B-Mixtral,FuseAI/OpenChat-3.5-7B-Solar" \
  --analysis_result "<path_to_save_analysis_result>/analysis.json" \
  --merged_model_save_dir "<path_to_save_fusechat_7b_varm>" \
  --merge_method "avg_param" \
  --merge_type "square"
```

## Evaluation

We evaluate FuseChat on MT-Bench, which comprises 80 multi-turn dialogues spanning writing, roleplay, reasoning, math, coding, STEM, and humanities domains. Please download the [official code](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and follow the guidelines for evaluation. We provide the scripts for our evaluation.

```bash
# Step 1. Generate model answers to MT-bench questions
export CUDA_VISIBLE_DEVICES=0,1
python gen_model_answer.py \
  --model-path "FuseAI/FuseChat-7B-VaRM" \
  --model-id "openchat_3.5_fusechat_7b_varm" \
  --num-gpus-per-model 1 \
  --num-gpus-total 2

# Step 2. Generate GPT-4 judgments
export OPENAI_API_KEY=XXXXXX  # set the OpenAI API key
python gen_judgment.py \
  --parallel 2

# Step 3. Show MT-bench scores
python show_result.py
```

## Citation

If you find this work relevant to your research or applications, please feel free to cite our work!

```
@article{wan2024fusechat,
  title={FuseChat: Knowledge Fusion of Chat Models},
  author={Fanqi Wan and Ziyi Yang and Longguang Zhong and Xiaojun Quan and Xinting Huang and Wei Bi},
  journal={arXiv preprint arXiv:2402.16107},
  year={2024}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FuseAI__FuseChat-7B-VaRM)

| Metric                           | Value |
|----------------------------------|------:|
| Avg.                             | 66.52 |
| AI2 Reasoning Challenge (25-Shot)| 62.88 |
| HellaSwag (10-Shot)              | 84.25 |
| MMLU (5-Shot)                    | 63.71 |
| TruthfulQA (0-shot)              | 45.67 |
| Winogrande (5-shot)              | 79.16 |
| GSM8k (5-shot)                   | 63.46 |
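To actually run one of the GGUF files listed at the top of this card, one lightweight option is `huggingface_hub` plus `llama-cpp-python`. This is a hedged sketch rather than an official loader: the repo id and file name come from the quant table above, and the prompt uses the OpenChat template shown in the Usage section.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the table above (Q4_K_M is a common quality/size trade-off).
model_path = hf_hub_download(
    repo_id="RichardErkhov/FuseAI_-_FuseChat-7B-VaRM-gguf",
    filename="FuseChat-7B-VaRM.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm(
    "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:",
    max_tokens=64,
    stop=["<|end_of_turn|>"],
)
print(out["choices"][0]["text"])
```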
Christoph-LSN/gpt2_finetuned_on_metadata
Christoph-LSN
2024-09-09T12:15:24Z
60
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T12:15:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mungsik/Meta-Llama-3.1-8B-Instruct-Text2Sql
mungsik
2024-09-09T12:10:08Z
47
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-09T12:07:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mxode/NanoLM-0.3B-Instruct-v1.1
Mxode
2024-09-09T12:07:13Z
14
2
null
[ "safetensors", "qwen2", "text2text-generation", "en", "zh", "license:gpl-3.0", "model-index", "region:us" ]
text2text-generation
2024-09-05T05:51:16Z
---
license: gpl-3.0
language:
- en
- zh
pipeline_tag: text2text-generation
model-index:
- name: NanoLM-0.3B-Instruct-v1.1
  results:
  - task:
      type: text-generation
    dataset:
      name: TriviaQA
      type: TriviaQA
    metrics:
    - name: score
      type: score
      value: 14.58
---

# NanoLM-0.3B-Instruct-v1.1

English | [简体中文](README_zh-CN.md)

## Introduction

In order to explore the potential of small models, I have attempted to build a series of them, which are available in the [NanoLM Collections](https://huggingface.co/collections/Mxode/nanolm-66d6d75b4a69536bca2705b2).

This is NanoLM-0.3B-Instruct-v1.1. The model currently supports both **Chinese and English languages, but performs better on English tasks**.

## Model Details

| Nano LMs | Non-emb Params | Arch | Layers | Dim | Heads | Seq Len |
| :----------: | :------------------: | :---: | :----: | :-------: | :---: | :---: |
| 25M | 15M | MistralForCausalLM | 12 | 312 | 12 | 2K |
| 70M | 42M | LlamaForCausalLM | 12 | 576 | 9 | 2K |
| **0.3B** | **180M** | **Qwen2ForCausalLM** | **12** | **896** | **14** | **4K** |
| 1B | 840M | Qwen2ForCausalLM | 18 | 1536 | 12 | 4K |

The tokenizer and model architecture of NanoLM-0.3B-Instruct-v1.1 are the same as [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B), but the number of layers has been reduced from 24 to 12. As a result, NanoLM-0.3B-Instruct-v1.1 has only 0.3 billion parameters, with approximately **180 million non-embedding parameters**.

Despite this, NanoLM-0.3B-Instruct-v1.1 still demonstrates strong instruction-following capabilities. Here are some examples. For reproducibility purposes, I've set `do_sample` to `False`. However, in practical use, you should configure the sampling parameters appropriately.

First, you should load the model as follows:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = 'Mxode/NanoLM-0.3B-Instruct-v1.1'

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
```

Next, define a `get_response` function for easy reuse:

```python
def get_response(prompt: str, **kwargs):
    generation_args = dict(
        max_new_tokens = kwargs.pop("max_new_tokens", 512),
        do_sample = kwargs.pop("do_sample", True),
        temperature = kwargs.pop("temperature", 0.7),
        top_p = kwargs.pop("top_p", 0.8),
        top_k = kwargs.pop("top_k", 40),
        **kwargs
    )

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    generated_ids = model.generate(model_inputs.input_ids, **generation_args)
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return response
```

### Example 1 - Simplified Chinese

```python
# Simplified Chinese
prompt1 = "如果我想报名参加马拉松比赛,但从未跑步超过3公里,我该怎么办?"
print(get_response(prompt1))

"""
如果你从未跑步超过3公里,这可能是因为你没有找到适合你当前水平的跑步路线,或者你可能没有找到适合你当前水平的跑步路线。以下是一些可能的解决方案:

1. **重新评估你的目标**:确保你已经确定了你想要参加的马拉松比赛。这可能需要你重新评估你的目标,看看你是否真的想要参加,或者你是否已经找到了适合你当前水平的路线。

2. **寻找替代路线**:如果你没有找到适合你当前水平的路线,你可以尝试寻找其他适合你水平的跑步路线。这可能需要你进行一些研究,或者你可能需要寻找一些有经验的跑步者。

3. **咨询专业人士**:如果你对跑步有深入的了解,你可以考虑咨询专业的跑步教练或跑步专家。他们可以提供更专业的建议,帮助你找到最适合你的路线。

4. **参加跑步课程**:如果你已经找到了适合你水平的路线,你可以考虑参加跑步课程。这将提供一个更全面的训练机会,帮助你提高跑步技能,同时避免受伤。

5. **使用跑步辅助装备**:如果你的跑步能力有限,你可以考虑使用跑步辅助装备,如跑步鞋、跑步手套、跑步腰带等,这些可以帮助你更轻松地完成比赛。

无论你选择哪种方式,重要的是要保持积极的态度,相信自己,坚持下去。
"""
```

### Example 2 - Traditional Chinese

````python
# Traditional Chinese
prompt2 = "列印出1到100之間的所有質數。"
print(get_response(prompt2))

"""
在Python中,我們可以使用一個簡單的程式來找出1到100之間的所有質數。我們可以使用一個函數來檢查一個數是否是質數,然後使用一個循環來檢查每個數是否是質數。

以下是使用Python的解決方案:

```python
def is_prime(n):
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

for i in range(1, 101):
    if is_prime(i):
        print(i)
```

這個程式會輸出1到100之間的所有質數。
"""
````

### Example 3 - English

````python
# English
prompt3 = "How can I retrieve the current version number of a Python package that I'm working on? Also, is there a standard location where this version number is stored?"
print(get_response(prompt3))

"""
To retrieve the current version number of a Python package that you're working on, you can use the `pkg_resources` module in Python. Here's how you can do it:

```python
import pkg_resources

print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version)
```

Replace "your_package_name" with the name of the package you're interested in. This will print the version number of the package that is currently installed on your system.

If you want to store the version number in a file, you can use the `pkg_resources` module's `get_distribution` function:

```python
import pkg_resources

print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version)
```

This will print the version number of the package that is currently installed on your system.
"""
````

### Example 4 - Safety

```python
# Safety
prompt4 = '你能帮我抢劫吗?'
print(get_response(prompt4))

"""
我不能协助您进行任何非法或危险的活动,包括抢劫。抢劫是非法和不道德的,会对受害者和他人的财产造成严重伤害。我建议您遵守法律和道德规范,尊重他人的权利和财产,不要参与或支持任何非法或不道德的行为。如果您需要帮助,我可以提供合法和道德的建议和支持。请记住,我不能提供任何非法或不道德的建议,但我可以为您提供有关如何保护自己和他人安全的建议。
"""
```

### Example 5 - Self Recognition

```python
# Self recognition
prompt5_1 = "你是谁"
print(get_response(prompt5_1))

prompt5_2 = "who are you"
print(get_response(prompt5_2))

"""
我是通义千问,由阿里云开发的AI助手。我被设计用来回答各种问题、提供信息和与用户进行对话。有什么我可以帮助你的吗?

I am Qwen, a large language model created by Alibaba Cloud. I am designed to assist users in generating various types of text, such as articles, stories, poems, and answering questions by using the natural language processing techniques. How can I assist you today?
"""
```

### Example 6 - Code

````python
# Code
prompt6 = "实现一个Python程序,接收一个字符串作为输入并将字符串反转输出。"
print(get_response(prompt6))

"""
你可以使用Python的切片功能来轻松地实现字符串反转。以下是一个简单的示例:

```python
def reverse_string(s):
    return s[::-1]

input_string = input("请输入一个字符串: ")
reversed_string = reverse_string(input_string)
print("反转后的字符串为:", reversed_string)
```

在这个示例中,我们定义了一个名为`reverse_string`的函数,它接收一个字符串参数`s`,并使用切片功能`[::-1]`来反转字符串。然后,我们从用户那里获取输入,调用`reverse_string`函数,并打印反转后的字符串。
"""
````
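Since the examples above were generated deterministically for reproducibility, a small usage note may help: any `generate` sampling parameter can be overridden per call through the `**kwargs` of the `get_response` helper defined earlier. The prompt and values below are only illustrative.

```python
# Override the sampling defaults of get_response per call.
print(get_response(
    "Write a short poem about autumn.",
    do_sample=True, temperature=0.9, top_p=0.95,
    max_new_tokens=128,
))
```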
nikita200/trained_model
nikita200
2024-09-09T12:03:19Z
14
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-09-09T08:00:14Z
---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of sks dog
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# DreamBooth - nikita200/trained_model

This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: False.

## Intended uses & limitations

#### How to use

```python
# Suggested snippet (not provided by the author): load the DreamBooth weights
# with the standard StableDiffusionPipeline and generate with the instance
# prompt the model was trained on.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nikita200/trained_model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
yemen2016/memobert3_ED
yemen2016
2024-09-09T11:57:04Z
14
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:MiMe-MeMo/MeMo-BERT-03", "base_model:finetune:MiMe-MeMo/MeMo-BERT-03", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T11:37:07Z
--- library_name: transformers base_model: MiMe-MeMo/MeMo-BERT-03 tags: - generated_from_trainer model-index: - name: memobert3_ED results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # memobert3_ED This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-03](https://huggingface.co/MiMe-MeMo/MeMo-BERT-03) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7895 - F1-score: 0.9012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 69 | 0.4129 | 0.8180 | | No log | 2.0 | 138 | 0.5440 | 0.8511 | | No log | 3.0 | 207 | 0.6458 | 0.8767 | | No log | 4.0 | 276 | 0.6689 | 0.8683 | | No log | 5.0 | 345 | 0.7171 | 0.8848 | | No log | 6.0 | 414 | 1.0837 | 0.8585 | | No log | 7.0 | 483 | 0.7652 | 0.8848 | | 0.1451 | 8.0 | 552 | 0.7895 | 0.9012 | | 0.1451 | 9.0 | 621 | 0.8248 | 0.8929 | | 0.1451 | 10.0 | 690 | 0.8456 | 0.8929 | | 0.1451 | 11.0 | 759 | 0.8626 | 0.8929 | | 0.1451 | 12.0 | 828 | 0.8791 | 0.8929 | | 0.1451 | 13.0 | 897 | 0.8929 | 0.8845 | | 0.1451 | 14.0 | 966 | 0.9028 | 0.8845 | | 0.0001 | 15.0 | 1035 | 0.9134 | 0.8845 | | 0.0001 | 16.0 | 1104 | 0.9205 | 0.8845 | | 0.0001 | 17.0 | 1173 | 0.9262 | 0.8845 | | 0.0001 | 18.0 | 1242 | 0.9309 | 0.8845 | | 0.0001 | 19.0 | 1311 | 0.9331 | 0.8845 | | 0.0001 | 20.0 | 1380 | 0.9337 | 0.8845 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
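The card above does not include a usage snippet. As a hedged sketch (the label set returned depends on the undocumented training data, so inspect the outputs before relying on them), inference with the 🤗 `pipeline` API might look like this:

```python
from transformers import pipeline

# Hypothetical usage sketch; the label names come from the (undocumented)
# fine-tuning dataset of memobert3_ED.
classifier = pipeline("text-classification", model="yemen2016/memobert3_ED")
print(classifier("Example input text for classification."))
```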
djovak/embedic-small
djovak
2024-09-09T11:34:40Z
41
2
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "mteb", "feature-extraction", "sentence-similarity", "multilingual", "en", "sr", "base_model:intfloat/multilingual-e5-small", "base_model:finetune:intfloat/multilingual-e5-small", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-07-06T17:47:10Z
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- mteb
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
language:
- multilingual
- en
- sr
base_model: intfloat/multilingual-e5-small
---

# djovak/embedic-small

Say hello to **Embedić**, a group of new text embedding models finetuned for the Serbian language!

These models are particularly useful for Information Retrieval and RAG purposes. Check out the images showcasing benchmark performance: you can beat the previous SOTA with 5x fewer parameters!

Although specialized for Serbian (Cyrillic and Latin scripts), Embedić is cross-lingual (it understands English too). So you can embed English docs, Serbian docs, or a combination of the two :)

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["ko je Nikola Tesla?", "Nikola Tesla je poznati pronalazač", "Nikola Jokić je poznati košarkaš"]

model = SentenceTransformer('djovak/embedic-small')
embeddings = model.encode(sentences)
print(embeddings)
```

### Important usage notes

- "ošišana latinica" (usage of c instead of ć, etc...) significantly decreases search quality
- The usage of uppercase letters for named entities can significantly improve search quality

## Training

- Embedić models are fine-tuned from multilingual-e5 models and they come in 3 sizes (small, base, large).
- Training is done on a single 4070 Ti Super GPU
- 3-step training: distillation, training on (query, text) pairs, and finally fine-tuning with triplets.

## Evaluation

### **Model description**:

| Model Name | Dimension | Sequence Length | Parameters |
|:----------------------------------------------------------------------------------------:|:---------:|:---------------:|:----------:|
| [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 512 | 117M |
| [djovak/embedic-small](https://huggingface.co/djovak/embedic-small) | 384 | 512 | 117M |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 512 | 278M |
| [djovak/embedic-base](https://huggingface.co/djovak/embedic-base) | 768 | 512 | 278M |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 512 | 560M |
| [djovak/embedic-large](https://huggingface.co/djovak/embedic-large) | 1024 | 512 | 560M |

`BM25-ENG` - Elasticsearch with English analyzer

`BM25-SRB` - Elasticsearch with Serbian analyzer

### Evaluation results

Evaluation on 3 tasks: Information Retrieval, Sentence Similarity, and Bitext Mining. I personally translated the STS17 cross-lingual evaluation dataset and spent $6,000 on the Google Translate API, translating 4 IR evaluation datasets into Serbian.

Evaluation datasets will be published as part of the [MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard) in the near future.

![information retrieval results](image-2.png)

![sentence similarity results](image-1.png)

## Contact

If you have any questions or suggestions related to this project, you can open an issue or pull request.
You can also email me at novakzivanic@gmail.com.

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## License

Embedić models are licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
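Beyond raw embeddings, a typical semantic-search flow with this model might look like the sketch below. It reuses the sentences from the usage snippet and only standard sentence-transformers utilities, so nothing model-specific is assumed.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("djovak/embedic-small")

query = "ko je Nikola Tesla?"
docs = [
    "Nikola Tesla je poznati pronalazač",
    "Nikola Jokić je poznati košarkaš",
]

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)

# Rank documents by cosine similarity to the query.
scores = util.cos_sim(query_emb, doc_embs)[0]
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```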
mergekit-community/mergekit-slerp-utfhkib
mergekit-community
2024-09-09T11:25:21Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLMTeam/WizardMath-7B-V1.1", "base_model:merge:WizardLMTeam/WizardMath-7B-V1.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-09T11:20:03Z
---
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- WizardLM/WizardMath-7B-V1.1
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
  - model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Hermes for input & output, WizardMath in the middle layers
```
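For intuition, spherical linear interpolation (SLERP) walks along the great circle between two weight vectors instead of the straight line that plain averaging takes, which better preserves each model's weight geometry. The sketch below is a generic textbook formulation of per-tensor SLERP, not mergekit's internal code; the `t` schedule in the config above assigns one interpolation factor per layer-depth bucket.

```python
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors of equal shape.
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        return (1.0 - t) * v0 + t * v1
    out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape)
```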
John6666/apolloluma-v1sehard-sdxl
John6666
2024-09-09T11:20:21Z
56
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "hentai", "pony", "en", "base_model:hanzogak/ApolloLuma", "base_model:finetune:hanzogak/ApolloLuma", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-09-09T11:13:01Z
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- pony
base_model: hanzogak/ApolloLuma
---

The original model is [here](https://huggingface.co/hanzogak/ApolloLuma) and on [Civitai](https://civitai.com/models/735704/apolloluma?modelVersionId=822740). It was created by [hanzogak](https://huggingface.co/hanzogak).
yeniguno/democracy-sentiment-analysis-turkish-roberta
yeniguno
2024-09-09T11:20:03Z
84
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "tr", "base_model:cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", "base_model:finetune:cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T10:27:41Z
---
library_name: transformers
base_model: cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: democracy-sentiment-analysis-turkish-roberta
  results: []
license: mit
language:
- tr
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# democracy-sentiment-analysis-turkish-roberta

This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual) on a custom democracy-sentiment dataset (described below). It achieves the following results on the evaluation set:
- Loss: 0.4469
- Accuracy: 0.8184
- F1: 0.8186
- Precision: 0.8224
- Recall: 0.8184

## Model description

This model is fine-tuned from the base model cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual for sentiment analysis in Turkish, specifically focusing on democracy-related text. The model classifies texts into three sentiment categories:

- Positive
- Neutral
- Negative

## Intended uses & limitations

This model is well-suited for analyzing sentiments in Turkish texts that discuss democracy, governance, and related political discourse.

## Training and evaluation data

The training dataset consists of 30,000 rows gathered from various sources, including Kaggle, Hugging Face, Ekşi Sözlük, and synthetic data generated using state-of-the-art LLMs. The dataset is multilingual in origin, with texts in English, Russian, and Turkish. All non-Turkish texts were translated into Turkish. The data represents a broad spectrum of democratic discourse from 30 different sources.

## How to Use

To use this model for sentiment analysis, you can leverage the Hugging Face `pipeline` for text classification as shown below:

```python
from transformers import pipeline

# Load the model from Hugging Face
sentiment_model = pipeline(model="yeniguno/democracy-sentiment-analysis-turkish-roberta", task='text-classification')

# Example text input
response = sentiment_model("En iyisi devletin tüm gücünü tek bir lidere verelim")
print(response)
# [{'label': 'negative', 'score': 0.9617443084716797}]

# Example text input
response = sentiment_model("Birçok farklı sesin çıkması zaman alıcı ve karmaşık görünebilir, ancak demokrasinin getirdiği özgürlük ve çeşitlilik, toplumun gerçek gücüdür.")
print(response)
# [{'label': 'positive', 'score': 0.958978533744812}]

# Example text input
response = sentiment_model("Bugün hava yağmurlu.")
print(response)
# [{'label': 'neutral', 'score': 0.9915837049484253}]
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7236 | 1.0 | 802 | 0.4797 | 0.8039 | 0.8031 | 0.8037 | 0.8039 |
| 0.424 | 2.0 | 1604 | 0.4469 | 0.8184 | 0.8186 | 0.8224 | 0.8184 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
benayas/distilbert-base-uncased-tweet_eval-v0
benayas
2024-09-09T11:19:44Z
67
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-09T11:19:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]