| Column | Type |
| --- | --- |
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | list |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
mnemic/HalloweenGlowStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:45:00Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:10:31Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: HalloweenGlowStyle --- # HalloweenGlowStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349105) ## Trigger Words ```HalloweenGlowStyle``` ![Model Preview](https://huggingface.co/mnemic/HalloweenGlowStyleXL-SDXL-LoRA/raw/main/HalloweenGlowStyleXL.preview.png) A glowing Halloween style.
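The LoRA cards in this dump name a trigger word and an SDXL base model but include no loading code. As a minimal sketch (not taken from the card), the adapter could be applied with diffusers roughly as follows; the `weight_name` is a hypothetical filename, since the card does not list the repository's files. The same pattern applies to the other `mnemic/*XL-SDXL-LoRA` entries below, swapping the repo id, weight file, and trigger word.

```python
# Hedged sketch: applying an SDXL LoRA such as the one above with diffusers.
# The weight filename below is assumed, not confirmed by the model card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "mnemic/HalloweenGlowStyleXL-SDXL-LoRA",
    weight_name="HalloweenGlowStyleXL.safetensors",  # assumed filename
)

# The card's trigger word activates the style.
image = pipe("HalloweenGlowStyle, a pumpkin-lit village street").images[0]
image.save("halloween_glow.png")
```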
mnemic/GalacticEmpireStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:44:53Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:09:20Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: GalacticEmpireStyle, white --- # GalacticEmpireStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347760) ## Trigger Words ```GalacticEmpireStyle, white``` ![Model Preview](https://huggingface.co/mnemic/GalacticEmpireStyleXL-SDXL-LoRA/raw/main/GalacticEmpireStyleXL.preview.png) Use this LoRA to find those rebel scum!
mnemic/GaelicPatternStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:44:49Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:08:46Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: GaelicPatternStyle --- # GaelicPatternStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/340410) ## Trigger Words ```GaelicPatternStyle``` ![Model Preview](https://huggingface.co/mnemic/GaelicPatternStyleXL-SDXL-LoRA/raw/main/GaelicPatternStyleXL.preview.png) Intricate abstract patterns in a Gaelic style.
mnemic/FluffyStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:44:45Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:08:11Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: FluffyStyle --- # FluffyStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347734) ## Trigger Words ```FluffyStyle``` ![Model Preview](https://huggingface.co/mnemic/FluffyStyleXL-SDXL-LoRA/raw/main/FluffyStyleXL.preview.png) Fluffy, furry, fuzzy soft and cuddly things!
mnemic/ExpeditionStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:44:42Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:07:39Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: ExpeditionStyle --- # ExpeditionStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347798) ## Trigger Words ```ExpeditionStyle``` ![Model Preview](https://huggingface.co/mnemic/ExpeditionStyleXL-SDXL-LoRA/raw/main/ExpeditionStyleXL.preview.png) Go on an adventure in a hidden tomb under the desert.
mnemic/ElementWindSDXL-SDXL-LoRA
mnemic
2024-06-17T22:44:38Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:07:02Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: ElementWind --- # ElementWindSDXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/488713) ## Trigger Words ```ElementWind``` ![Model Preview](https://huggingface.co/mnemic/ElementWindSDXL-SDXL-LoRA/raw/main/ElementWindSDXL.preview.png) This LoRA blows.
John6666/bunny-mint-sdxl
John6666
2024-06-17T22:44:33Z
3110
2
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-17T12:51:15Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://huggingface.co/lylogummy/BunnyMint) and on [Civitai](https://civitai.com/models/520972/bunny-mint?modelVersionId=578826).
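Unlike the LoRA entries, this repository is tagged `diffusers:StableDiffusionXLPipeline`, so it should load directly as a full pipeline. A minimal sketch, with an illustrative prompt:

```python
# Hedged sketch: loading a diffusers-format SDXL checkpoint from the Hub.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/bunny-mint-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, mint-colored hair, bunny ears, anime style").images[0]
image.save("bunny_mint.png")
```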
mnemic/ElementsMixSDXL-SDXL-LoRA
mnemic
2024-06-17T22:44:31Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:05:57Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: ElementsMix, fire, water, wind, earth --- # ElementsMixSDXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/493764) ## Trigger Words ```ElementsMix, fire, water, wind, earth``` ![Model Preview](https://huggingface.co/mnemic/ElementsMixSDXL-SDXL-LoRA/raw/main/ElementsMixSDXL.preview.png) A Mixture of Elements model. The sum turned out to be better than the parts!
mnemic/ElementFireSDXL-SDXL-LoRA
mnemic
2024-06-17T22:44:24Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:05:27Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: ElementFire --- # ElementFireSDXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/488743) ## Trigger Words ```ElementFire``` ![Model Preview](https://huggingface.co/mnemic/ElementFireSDXL-SDXL-LoRA/raw/main/ElementFireSDXL.preview.png) A fiery elemental flavor.
mnemic/DeadpoolStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:44:17Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T16:04:24Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: DeadpoolStyle --- # DeadpoolStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349196) ## Trigger Words ```DeadpoolStyle``` ![Model Preview](https://huggingface.co/mnemic/DeadpoolStyleXL-SDXL-LoRA/raw/main/DeadpoolStyleXL.preview.png) Touch yourself tonight <3
mnemic/CarnageStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:43:27Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:55:54Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: CarnageStyle --- # CarnageStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347751) ## Trigger Words ```CarnageStyle``` ![Model Preview](https://huggingface.co/mnemic/CarnageStyleXL-SDXL-LoRA/raw/main/CarnageStyleXL.preview.png) Some kind of Carnage style. It's not as strong as the SD1.5 version.
mnemic/CardboardStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:43:23Z
0
1
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:55:32Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: CardboardStyle --- # CardboardStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/277115) ## Trigger Words ```CardboardStyle``` ![Model Preview](https://huggingface.co/mnemic/CardboardStyleXL-SDXL-LoRA/raw/main/CardboardStyleXL.preview.png) Things are now made out of cardboard!
mnemic/BatmanCoreXL-SDXL-LoRA
mnemic
2024-06-17T22:43:12Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:53:59Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: BatmanCore --- # BatmanCoreXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/349125) ## Trigger Words ```BatmanCore``` ![Model Preview](https://huggingface.co/mnemic/BatmanCoreXL-SDXL-LoRA/raw/main/BatmanCoreXL.preview.png) It's Batman! It puts spikes and wings on things and black armor on people.
mnemic/BarbieCoreXL-SDXL-LoRA
mnemic
2024-06-17T22:43:08Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:53:31Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: BarbieCore --- # BarbieCoreXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/347252) ## Trigger Words ```BarbieCore``` ![Model Preview](https://huggingface.co/mnemic/BarbieCoreXL-SDXL-LoRA/raw/main/BarbieCoreXL.preview.png) Pink and plastic, and quite fantastic!
mnemic/AbstractPatternStyleXL-SDXL-LoRA
mnemic
2024-06-17T22:43:05Z
0
0
null
[ "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:53:02Z
--- license: gpl-3.0 base_model: stabilityai/stable-diffusion-xl-base-1.0 trained_words: AbstractPatternStyle --- # AbstractPatternStyleXL - SDXL - LoRA [CivitAI Page](https://civitai.com/models/346675) ## Trigger Words ```AbstractPatternStyle``` ![Model Preview](https://huggingface.co/mnemic/AbstractPatternStyleXL-SDXL-LoRA/raw/main/AbstractPatternStyleXL.preview.png) Creates beautiful abstract patterns. A really fun model to add on top of anything for some colorful creativity!
mnemic/WaffleStyle-SD1.5-LoRA
mnemic
2024-06-17T22:42:51Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:51:54Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: WaffleStyle --- # WaffleStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/116280) ## Trigger Words ```WaffleStyle``` ![Model Preview](https://huggingface.co/mnemic/WaffleStyle-SD1.5-LoRA/raw/main/WaffleStyle.preview.png) Adds a lot of square grids to things.
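The SD1.5 LoRAs that follow use the same recipe as the SDXL sketch earlier, but on the `runwayml/stable-diffusion-v1-5` base with `StableDiffusionPipeline`. A minimal sketch, again with an assumed weight filename:

```python
# Hedged sketch: applying an SD1.5 LoRA; the weight filename is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "mnemic/WaffleStyle-SD1.5-LoRA",
    weight_name="WaffleStyle.safetensors",  # assumed filename
)
image = pipe("WaffleStyle, a cozy cottage at sunrise").images[0]
image.save("waffle_style.png")
```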
mnemic/SemlaStyle-SD1.5-LoRA
mnemic
2024-06-17T22:42:34Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:49:52Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: SemlaStyle --- # SemlaStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/) ## Trigger Words ```SemlaStyle``` ![Model Preview](https://huggingface.co/mnemic/SemlaStyle-SD1.5-LoRA/raw/main/SemlaStyle.preview.png) Anything is better in semla form.
mnemic/NESVoxelStyle-SD1.5-LoRA
mnemic
2024-06-17T22:42:16Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:47:58Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: NESVoxelStyle --- # NESVoxelStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/222945) ## Trigger Words ```NESVoxelStyle``` ![Model Preview](https://huggingface.co/mnemic/NESVoxelStyle-SD1.5-LoRA/raw/main/NESVoxelStyle.preview.png) An experimental Voxel model.
mnemic/NESStyle-SD1.5-LoRA
mnemic
2024-06-17T22:42:13Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:47:38Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: NESStyle --- # NESStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/146159) ## Trigger Words ```NESStyle``` ![Model Preview](https://huggingface.co/mnemic/NESStyle-SD1.5-LoRA/raw/main/NESStyle.preview.png) What if everything was as beautiful as a NES?
mnemic/MinionStyle-SD1.5-LoRA
mnemic
2024-06-17T22:42:09Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:47:18Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: MinionStyle --- # MinionStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/147313) ## Trigger Words ```MinionStyle``` ![Model Preview](https://huggingface.co/mnemic/MinionStyle-SD1.5-LoRA/raw/main/MinionStyle.preview.png) Anything can be a minion!
mnemic/JediStyle-SD1.5-LoRA
mnemic
2024-06-17T22:42:05Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:46:57Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: JediStyle --- # JediStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/269702) ## Trigger Words ```JediStyle``` ![Model Preview](https://huggingface.co/mnemic/JediStyle-SD1.5-LoRA/raw/main/JediStyle.preview.png) A touch of Jedi. Mostly adds blue lasers to beige stuff, but sometimes this is just what you need.
mnemic/ElementFire-SD1.5-LoRA
mnemic
2024-06-17T22:41:21Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:43:19Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: ElementFire --- # ElementFire - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/488734) ## Trigger Words ```ElementFire``` ![Model Preview](https://huggingface.co/mnemic/ElementFire-SD1.5-LoRA/raw/main/ElementFire.preview.png) A fierce elemental flavor.
mnemic/DeadpoolStyle-SD1.5-LoRA
mnemic
2024-06-17T22:41:12Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:42:47Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: DeadpoolStyle --- # DeadpoolStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/222933) ## Trigger Words ```DeadpoolStyle``` ![Model Preview](https://huggingface.co/mnemic/DeadpoolStyle-SD1.5-LoRA/raw/main/DeadpoolStyle.preview.png) Touch yourself tonight <3
mnemic/DavyJonesLockerStyle-SD1.5-LoRA
mnemic
2024-06-17T22:41:09Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:42:26Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: DavyJonesLockerStyle --- # DavyJonesLockerStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/220258) ## Trigger Words ```DavyJonesLockerStyle``` ![Model Preview](https://huggingface.co/mnemic/DavyJonesLockerStyle-SD1.5-LoRA/raw/main/DavyJonesLockerStyle.preview.png) Adds a bit of underwater musky smell to all your images.
mnemic/CardboardStyle-SD1.5-LoRA
mnemic
2024-06-17T22:40:27Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:37:15Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: CardboardStyle --- # CardboardStyle - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/277115) ## Trigger Words ```CardboardStyle``` ![Model Preview](https://huggingface.co/mnemic/CardboardStyle-SD1.5-LoRA/raw/main/CardboardStyle.preview.png) Things are now made out of cardboard!
mnemic/BatmanCore-SD1.5-LoRA
mnemic
2024-06-17T22:40:17Z
0
0
null
[ "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:gpl-3.0", "region:us" ]
null
2024-06-17T15:28:49Z
--- license: gpl-3.0 base_model: runwayml/stable-diffusion-v1-5 trained_words: BatmanCore --- # BatmanCore - SD1.5 - LoRA [CivitAI Page](https://civitai.com/models/197894) ## Trigger Words ```BatmanCore``` ![Model Preview](https://huggingface.co/mnemic/BatmanCore-SD1.5-LoRA/raw/main/BatmanCore.preview.png) It's Batman! It puts spikes and wings on things and black armor on people.
samira1234/T5-French_DZdaridja_translator
samira1234
2024-06-17T22:34:04Z
5
0
transformers
[ "transformers", "t5", "text2text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-17T11:10:39Z
--- license: other license_name: samirakheraba-aminayousrakadri license_link: LICENSE ---
John6666/ebara-mfcg-pony-mix-v12-sdxl
John6666
2024-06-17T22:26:37Z
6338
3
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-17T22:21:37Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/466637/ebaramfcgponymix?modelVersionId=579053).
Babelscape/cner-base
Babelscape
2024-06-17T22:22:17Z
79
4
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "token-classification", "named-entity-recognition", "sequence-tagger-model", "en", "dataset:Babelscape/cner", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-04-10T07:55:44Z
--- annotations_creators: - machine-generated language_creators: - machine-generated widget: - text: George Washington went to Washington. - text: What is the seventh tallest mountain in North America? tags: - named-entity-recognition - sequence-tagger-model datasets: - Babelscape/cner language: - en pretty_name: cner-model source_datasets: - original task_categories: - structure-prediction task_ids: - named-entity-recognition --- # CNER: Concept and Named Entity Recognition This is the model card for the NAACL 2024 paper [CNER: Concept and Named Entity Recognition](https://aclanthology.org/2024.naacl-long.461/). We fine-tuned a language model (DeBERTa-v3-base) for 1 epoch on our [CNER dataset](https://huggingface.co/datasets/Babelscape/cner) using the default hyperparameters, optimizer and architecture of Hugging Face; the results of this model may therefore differ from the ones presented in the paper. The resulting CNER model jointly identifies and classifies concepts and named entities with fine-grained tags. **If you use the model, please reference this work in your paper**: ```bibtex @inproceedings{martinelli-etal-2024-cner, title = "{CNER}: Concept and Named Entity Recognition", author = "Martinelli, Giuliano and Molfese, Francesco and Tedeschi, Simone and Fern{\'a}ndez-Castro, Alberte and Navigli, Roberto", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.461", pages = "8329--8344", } ``` The original repository for the paper can be found at [https://github.com/Babelscape/cner](https://github.com/Babelscape/cner). ## How to use You can use this model with the Transformers NER *pipeline*. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("Babelscape/cner-base") model = AutoModelForTokenClassification.from_pretrained("Babelscape/cner-base") nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True) example = "What is the seventh tallest mountain in North America?" ner_results = nlp(example) print(ner_results) ``` ## Classes <img src="https://cdn-uploads.huggingface.co/production/uploads/65e9ccd84ce78d665a50f78b/2K3NZ79go3Zjf3qFeHO0O.png" alt="drawing" /> ## Licensing Information Contents of this repository are restricted to non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents and models belongs to the original copyright holders. `microsoft/deberta-v3-base` is released under the [MIT license](https://choosealicense.com/licenses/mit/).
thesullivantage/my_test_billsum_model
thesullivantage
2024-06-17T22:20:05Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-17T20:48:44Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_test_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_test_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5228 - Rouge1: 0.141 - Rouge2: 0.0489 - Rougel: 0.1157 - Rougelsum: 0.1158 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8147 | 0.1265 | 0.0394 | 0.1087 | 0.1087 | 19.0 | | No log | 2.0 | 124 | 2.6051 | 0.1355 | 0.0443 | 0.1125 | 0.1125 | 19.0 | | No log | 3.0 | 186 | 2.5396 | 0.1387 | 0.0478 | 0.1145 | 0.1145 | 19.0 | | No log | 4.0 | 248 | 2.5228 | 0.141 | 0.0489 | 0.1157 | 0.1158 | 19.0 | ### Framework versions - Transformers 4.41.2 - Pytorch 1.13.1+cu117 - Datasets 2.20.0 - Tokenizers 0.19.1
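Since the card stops at training metrics, here is a minimal usage sketch via the summarization pipeline. The `summarize:` prefix follows the standard T5 convention from the Hugging Face summarization tutorial and is an assumption about how this checkpoint was trained; the input text is illustrative.

```python
# Hedged sketch: running the fine-tuned T5 summarizer.
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="thesullivantage/my_test_billsum_model"
)
# "summarize: " prefix assumed, per the usual T5 fine-tuning recipe.
text = "summarize: Section 1. Short title. This Act may be cited as the Example Act of 2024. Section 2. ..."
print(summarizer(text)[0]["summary_text"])
```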
MaziyarPanahi/mergekit-slerp-wvyefgo-GGUF
MaziyarPanahi
2024-06-17T22:18:00Z
7
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-wvyefgo", "base_model:quantized:mergekit-community/mergekit-slerp-wvyefgo" ]
text-generation
2024-06-17T21:55:13Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch - base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-wvyefgo-GGUF base_model: mergekit-community/mergekit-slerp-wvyefgo inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-wvyefgo-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-wvyefgo-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-wvyefgo](https://huggingface.co/mergekit-community/mergekit-slerp-wvyefgo) ## Description [MaziyarPanahi/mergekit-slerp-wvyefgo-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-wvyefgo-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-wvyefgo](https://huggingface.co/mergekit-community/mergekit-slerp-wvyefgo). ### About GGUF GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
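Of the clients listed above, llama-cpp-python can pull a GGUF file straight from the Hub. A minimal sketch; the quantization filename pattern is an assumption, since the card does not enumerate the repo's files.

```python
# Hedged sketch: loading one of the GGUF quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/mergekit-slerp-wvyefgo-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; the glob picks a matching file
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is GGUF, in one sentence?"}]
)
print(out["choices"][0]["message"]["content"])
```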
John6666/wai-cute-v3-sdxl
John6666
2024-06-17T22:14:03Z
2387
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-17T22:09:23Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/451486/wai-cute?modelVersionId=579661).
jiazhengli/Pythia-2.8B-TLDR-Iterative-SamPO
jiazhengli
2024-06-17T21:59:32Z
152
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "en", "dataset:webis/tldr-17", "arxiv:2305.18290", "base_model:EleutherAI/pythia-2.8b", "base_model:finetune:EleutherAI/pythia-2.8b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T21:09:06Z
--- model-index: - name: robinlee99/Pythia-2.8B-TLDR-Iterative-SamPO results: [] datasets: - webis/tldr-17 language: - en base_model: EleutherAI/pythia-2.8b license: apache-2.0 --- # Model Card for Pythia-2.8B-TLDR-Iterative-SamPO This repository provides a fine-tuned version of Pythia-2.8B, using our proposed [SamPO](https://github.com/LuJunru/SamPO) algorithm: Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence. ## Performance | vs. SFT | wins | len / token | | ------ | ------ | ------ | | DPO | 60.98 | 53.8 | | Iterative DPO | **73.58** | 66.65 | | Length Normed DPO | 58.13 | 47.34 | | SimPO | 33.33 | **31.9** | | Iterative SamPO | **73.58** | 49.54 | ## Evaluation Details We test our model with the same GPT-4 Win rate prompt template proposed by the [DPO paper](https://arxiv.org/pdf/2305.18290). The [sampled test set](https://huggingface.co/robinlee99/Pythia-2.8B-TLDR-Iterative-SamPO/blob/main/test_tldr.jsonl) is included in this repo. ## Training hyperparameters The following hyperparameters were used during DPO/SamPO training: - DPO beta: 0.5 - learning_rate: 1e-6 - total_train_batch_size: 128 - optimizer: AdamW with beta1 0.9, beta2 0.999 and epsilon 1e-8 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - Weight Decay: 0.0 - num_epochs: 1.0
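The card reports win rates but no inference snippet. A minimal sketch of sampling a summary follows; the Reddit-style `TL;DR:` prompt format is an assumption based on the TL;DR summarization task, not something the card specifies.

```python
# Hedged sketch: generating a TL;DR with the SamPO-tuned Pythia model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jiazhengli/Pythia-2.8B-TLDR-Iterative-SamPO"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

# Prompt format assumed from the TL;DR task; the post text is illustrative.
prompt = "POST: I spent a week rewriting my thesis introduction from scratch...\nTL;DR:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```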
lielbin/BabyBERTa-french_aochildes_2.5M-Masking-finetuned-squad
lielbin
2024-06-17T21:59:19Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-06-17T14:48:46Z
--- tags: - generated_from_trainer model-index: - name: BabyBERTa-french_aochildes_2.5M-Masking-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BabyBERTa-french_aochildes_2.5M-Masking-finetuned-squad This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
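The card is auto-generated and empty, but the repo is tagged `question-answering`, so a minimal extractive-QA sketch would look like the following; the question and context are illustrative.

```python
# Hedged sketch: extractive QA with the fine-tuned BabyBERTa checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lielbin/BabyBERTa-french_aochildes_2.5M-Masking-finetuned-squad",
)
result = qa(
    question="Who went to Washington?",
    context="George Washington went to Washington.",
)
print(result["answer"], result["score"])
```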
daswer123/xtts_ru_dvae_100h
daswer123
2024-06-17T21:51:25Z
0
0
null
[ "license:mit", "region:us" ]
null
2024-06-17T21:47:32Z
--- license: mit --- This is a DVAE fine-tune for XTTSv2, based on the scripts presented here: https://github.com/daswer123/xtts-finetune-tests/tree/main/dvae-finetune It was trained on 100 hours of high-quality Russian speech and should potentially improve the fine-tuning quality of the GPT-2 and Perceiver models. You can try using it in xtts-finetune-webui as a custom DVAE. ``` wandb: Run summary: wandb: commit_loss 0.04019 wandb: cur_step 2571 wandb: epoch 19 wandb: loss 0.10499 wandb: recon_loss 0.06481 ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a26decbfaec7e7cad01948/ViV5c14i5qLr-v9p__LTy.png)
lifiaresearch/cascade-mask-rcnn_r50_fpn_1x_coco_IPVBA-EL_crop_512
lifiaresearch
2024-06-17T21:51:11Z
0
0
null
[ "license:mit", "region:us" ]
null
2024-06-14T20:01:17Z
--- license: mit --- https://huggingface.co/spaces/CVPR/WALT/blob/main/mmdet/models/detectors/cascade_rcnn.py
abwabai/xlm-v-base-merchant-ner-v2
abwabai
2024-06-17T21:50:59Z
126
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-06-17T21:42:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TodoP/Prueba
TodoP
2024-06-17T21:49:33Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-06-17T21:49:33Z
--- license: apache-2.0 ---
dsatya6/sentiment-bert-text-gcp-v1
dsatya6
2024-06-17T21:48:06Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-17T21:48:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zJuu/qwen2-7b-1718660228
zJuu
2024-06-17T21:37:17Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "region:us" ]
null
2024-06-17T21:37:09Z
--- base_model: Qwen/Qwen2-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
animaRegem/llama-3-gaya-3-June-2024-lora-model
animaRegem
2024-06-17T21:26:55Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-17T21:26:44Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** animaRegem - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
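The card names the base model but gives no loading code; the repo name suggests it holds LoRA adapters. A minimal sketch under that assumption, attaching the adapter with PEFT:

```python
# Hedged sketch: assuming this repo holds LoRA adapters for the named base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(
    base, "animaRegem/llama-3-gaya-3-June-2024-lora-model"
)
tok = AutoTokenizer.from_pretrained(base_id)
```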
YYYYYYibo/nash_ave_pi_with_golden_iter_2
YYYYYYibo
2024-06-17T21:26:14Z
1
0
peft
[ "peft", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:updated", "dataset:original", "base_model:alignment-handbook/zephyr-7b-sft-full", "base_model:adapter:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-06-17T19:25:07Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo base_model: alignment-handbook/zephyr-7b-sft-full datasets: - updated - original model-index: - name: nash_ave_pi_with_golden_iter_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nash_ave_pi_with_golden_iter_2 This model is a fine-tuned version of [YYYYYYibo/nash_ave_pi_with_golden_iter_1](https://huggingface.co/YYYYYYibo/nash_ave_pi_with_golden_iter_1) on the updated and the original datasets. It achieves the following results on the evaluation set: - Loss: 0.6988 - Rewards/chosen: -0.1054 - Rewards/rejected: -0.1068 - Rewards/accuracies: 0.4780 - Rewards/margins: 0.0014 - Logps/rejected: -300.1882 - Logps/chosen: -308.5959 - Logits/rejected: -2.3398 - Logits/chosen: -2.4257 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6483 | 0.64 | 100 | 0.6988 | -0.1054 | -0.1068 | 0.4780 | 0.0014 | -300.1882 | -308.5959 | -2.3398 | -2.4257 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.3.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
datek/google-gemma-2b-1718659445
datek
2024-06-17T21:24:09Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2024-06-17T21:24:05Z
--- library_name: peft base_model: google/gemma-2b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
datek/Qwen-Qwen1.5-1.8B-1718659398
datek
2024-06-17T21:23:20Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-17T21:23:18Z
--- library_name: peft base_model: Qwen/Qwen1.5-1.8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
theothertom/whisper-small-indian_eng
theothertom
2024-06-17T21:20:39Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-22T12:33:35Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer metrics: - wer model-index: - name: whisper-small-indian_eng results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-indian_eng This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3169 - Wer: 10.6061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.5504 | 1.5873 | 100 | 0.4581 | 10.6061 | | 0.058 | 3.1746 | 200 | 0.2114 | 8.8154 | | 0.0126 | 4.7619 | 300 | 0.2365 | 8.1267 | | 0.0078 | 6.3492 | 400 | 0.2513 | 7.9890 | | 0.0026 | 7.9365 | 500 | 0.2596 | 7.1625 | | 0.0027 | 9.5238 | 600 | 0.2676 | 8.1267 | | 0.0031 | 11.1111 | 700 | 0.2842 | 7.3003 | | 0.0013 | 12.6984 | 800 | 0.2707 | 18.7328 | | 0.0004 | 14.2857 | 900 | 0.2814 | 13.6364 | | 0.0006 | 15.8730 | 1000 | 0.2806 | 11.1570 | | 0.0009 | 17.4603 | 1100 | 0.2906 | 11.0193 | | 0.0004 | 19.0476 | 1200 | 0.2948 | 11.1570 | | 0.0003 | 20.6349 | 1300 | 0.2984 | 10.8815 | | 0.0002 | 22.2222 | 1400 | 0.3008 | 10.8815 | | 0.0003 | 23.8095 | 1500 | 0.2992 | 10.7438 | | 0.0001 | 25.3968 | 1600 | 0.3040 | 10.8815 | | 0.0002 | 26.9841 | 1700 | 0.3088 | 10.7438 | | 0.0001 | 28.5714 | 1800 | 0.3077 | 10.6061 | | 0.0001 | 30.1587 | 1900 | 0.3098 | 10.4683 | | 0.0001 | 31.7460 | 2000 | 0.3111 | 10.4683 | | 0.0001 | 33.3333 | 2100 | 0.3119 | 10.4683 | | 0.0001 | 34.9206 | 2200 | 0.3129 | 10.4683 | | 0.0001 | 36.5079 | 2300 | 0.3142 | 10.4683 | | 0.0001 | 38.0952 | 2400 | 0.3146 | 10.4683 | | 0.0001 | 39.6825 | 2500 | 0.3153 | 10.6061 | | 0.0001 | 41.2698 | 2600 | 0.3158 | 10.6061 | | 0.0001 | 42.8571 | 2700 | 0.3162 | 10.6061 | | 0.0001 | 44.4444 | 2800 | 0.3167 | 10.6061 | | 0.0001 | 46.0317 | 2900 | 0.3169 | 10.6061 | | 0.0001 | 47.6190 | 3000 | 0.3169 | 10.6061 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.0.0 - Datasets 2.16.1 - Tokenizers 0.19.1
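A minimal transcription sketch with the 🤗 `pipeline` API, assuming this checkpoint loads like any other fine-tuned Whisper model; the audio path below is a placeholder:

```python
# Hedged sketch: transcribe a local audio file with the fine-tuned checkpoint.
# "sample.wav" is a placeholder path; ffmpeg is needed for audio decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="theothertom/whisper-small-indian_eng",
)
print(asr("sample.wav")["text"])
```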
MaziyarPanahi/mergekit-slerp-dafvhck-GGUF
MaziyarPanahi
2024-06-17T21:11:51Z
8
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-dafvhck", "base_model:quantized:mergekit-community/mergekit-slerp-dafvhck" ]
text-generation
2024-06-17T20:49:07Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch - base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-dafvhck-GGUF base_model: mergekit-community/mergekit-slerp-dafvhck inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-dafvhck-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-dafvhck-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-dafvhck](https://huggingface.co/mergekit-community/mergekit-slerp-dafvhck) ## Description [MaziyarPanahi/mergekit-slerp-dafvhck-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-dafvhck-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-dafvhck](https://huggingface.co/mergekit-community/mergekit-slerp-dafvhck). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
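As a concrete illustration of consuming these files, here is a minimal sketch using llama-cpp-python from the list above; the quant filename is an assumption, so substitute whichever GGUF you actually downloaded:

```python
# Hedged sketch: run one of the GGUF quants locally with llama-cpp-python.
# The model_path filename is an assumed example; use the file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="mergekit-slerp-dafvhck.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about model merging.", max_tokens=48)
print(out["choices"][0]["text"])
```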
karths/qwen_multi_file_merged_model
karths
2024-06-17T21:04:48Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Qwen2-7B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T20:59:53Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** karths - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-7B-Instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
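A minimal loading sketch, assuming the merged checkpoint works with plain transformers (it is a full merged model rather than a LoRA adapter):

```python
# Hedged sketch: generate with the merged model via plain transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "karths/qwen_multi_file_merged_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```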
Wsassi/Hermes-2-Pro-Mistral-7B_function_calling
Wsassi
2024-06-17T21:03:00Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-17T21:02:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
karths/qwenrepo_lora_model
karths
2024-06-17T20:59:51Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Qwen2-7B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-17T20:59:38Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** karths - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-7B-Instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lielbin/BabyBERTa-french_aochildes_2.5M-Masking-finetuned-qamr
lielbin
2024-06-17T20:55:59Z
113
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-06-17T14:13:35Z
--- tags: - generated_from_trainer model-index: - name: BabyBERTa-french_aochildes_2.5M-Masking-finetuned-qamr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BabyBERTa-french_aochildes_2.5M-Masking-finetuned-qamr This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
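A minimal usage sketch, assuming the checkpoint exposes a standard extractive QA head; the question and context are illustrative only:

```python
# Hedged sketch: extractive question answering with the fine-tuned model.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lielbin/BabyBERTa-french_aochildes_2.5M-Masking-finetuned-qamr",
)
result = qa(question="Who wrote the report?", context="The report was written by Marie.")
print(result["answer"], result["score"])
```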
Josephgflowers/TinyLlama-v1.1-Agent-Rag-Nerd-v1
Josephgflowers
2024-06-17T20:52:42Z
150
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T20:45:15Z
--- license: mit --- This model is a fine-tuned version of [Josephgflowers/TinyLlama-v1.1-Tiny-Agent-Test](https://huggingface.co/Josephgflowers/TinyLlama-v1.1-Tiny-Agent-Test), trained on an agent-based dataset covering tool usage, function calling, programming, STEM, NER, address parsing, RAG, and summarization. Special thanks to https://nationtech.io/ for their generous sponsorship in training this model. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/TRMquVVcSYpatpeF4V2NE.png)
wdli/llama3_soda_finetuned_lora1
wdli
2024-06-17T20:52:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-17T20:49:31Z
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** wdli - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lielbin/BabyBERTa-french-Masking-finetuned-squad
lielbin
2024-06-17T20:48:50Z
115
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-06-17T13:38:18Z
--- tags: - generated_from_trainer model-index: - name: BabyBERTa-french-Masking-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BabyBERTa-french-Masking-finetuned-squad This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
haltincay/my_great_model
haltincay
2024-06-17T20:46:38Z
54
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-05T08:03:12Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: my_great_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_great_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0118 - Accuracy: 0.9973 - F1: 0.9725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 275 | 0.0150 | 0.9964 | 0.9643 | | 0.0257 | 2.0 | 550 | 0.0118 | 0.9973 | 0.9725 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
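A minimal inference sketch via the text-classification pipeline; the label set depends on the (unspecified) training data:

```python
# Hedged sketch: classify a sentence with the fine-tuned DistilBERT model.
# Label names are whatever the training run configured; the card does not say.
from transformers import pipeline

clf = pipeline("text-classification", model="haltincay/my_great_model")
print(clf("This is exactly what I was looking for."))
```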
MaziyarPanahi/mergekit-slerp-aywerbb-GGUF
MaziyarPanahi
2024-06-17T20:38:41Z
76
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:Equall/Saul-Base", "base_model:HuggingFaceH4/zephyr-7b-beta", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-aywerbb", "base_model:quantized:mergekit-community/mergekit-slerp-aywerbb" ]
text-generation
2024-06-17T20:16:01Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:Equall/Saul-Base - base_model:HuggingFaceH4/zephyr-7b-beta - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-aywerbb-GGUF base_model: mergekit-community/mergekit-slerp-aywerbb inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-aywerbb-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-aywerbb-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-aywerbb](https://huggingface.co/mergekit-community/mergekit-slerp-aywerbb) ## Description [MaziyarPanahi/mergekit-slerp-aywerbb-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-aywerbb-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-aywerbb](https://huggingface.co/mergekit-community/mergekit-slerp-aywerbb). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
Casper0508/MSc_llama2_finetuned_model_updatePara
Casper0508
2024-06-17T20:35:08Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-06-17T20:35:02Z
--- license: llama2 base_model: meta-llama/Llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: MSc_llama2_finetuned_model_updatePara results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MSc_llama2_finetuned_model_updatePara This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 - load_in_4bit: True - load_in_8bit: False ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 16.5659 | 1.21 | 10 | 1.7444 | | 1.1543 | 2.42 | 20 | 0.8871 | | 0.7821 | 3.64 | 30 | 0.6674 | | 0.6295 | 4.85 | 40 | 0.5773 | | 0.5642 | 6.06 | 50 | 0.5303 | | 0.5251 | 7.27 | 60 | 0.4983 | | 0.4969 | 8.48 | 70 | 0.4813 | | 0.4805 | 9.7 | 80 | 0.4689 | | 0.4702 | 10.91 | 90 | 0.4585 | | 0.4651 | 12.12 | 100 | 0.4512 | | 0.4472 | 13.33 | 110 | 0.4462 | | 0.4466 | 14.55 | 120 | 0.4415 | | 0.4413 | 15.76 | 130 | 0.4385 | | 0.4398 | 16.97 | 140 | 0.4398 | | 0.4314 | 18.18 | 150 | 0.4382 | | 0.4313 | 19.39 | 160 | 0.4313 | | 0.4314 | 20.61 | 170 | 0.4294 | | 0.4189 | 21.82 | 180 | 0.4279 | | 0.4203 | 23.03 | 190 | 0.4285 | | 0.4164 | 24.24 | 200 | 0.4279 | | 0.4183 | 25.45 | 210 | 0.4268 | | 0.4107 | 26.67 | 220 | 0.4268 | | 0.4068 | 27.88 | 230 | 0.4252 | | 0.4082 | 29.09 | 240 | 0.4266 | | 0.4074 | 30.3 | 250 | 0.4299 | | 0.4025 | 31.52 | 260 | 0.4266 | | 0.4038 | 32.73 | 270 | 0.4264 | | 0.4008 | 33.94 | 280 | 0.4284 | | 0.4002 | 35.15 | 290 | 0.4279 | | 0.3937 | 36.36 | 300 | 0.4307 | | 0.4033 | 37.58 | 310 | 0.4299 | | 0.3974 | 38.79 | 320 | 0.4307 | | 0.3951 | 40.0 | 330 | 0.4312 | | 0.3948 | 41.21 | 340 | 0.4327 | | 0.3971 | 42.42 | 350 | 0.4321 | | 0.3958 | 43.64 | 360 | 0.4327 | | 0.3906 | 44.85 | 370 | 0.4333 | | 0.3977 | 46.06 | 380 | 0.4328 | | 0.3956 | 47.27 | 390 | 0.4330 | | 0.3956 | 48.48 | 400 | 0.4329 | ### Framework versions - PEFT 0.4.0 - Transformers 4.38.2 - Pytorch 2.3.0+cu121 - Datasets 2.13.1 - Tokenizers 0.15.2
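A minimal loading sketch that mirrors the quantization config listed above, assuming access to the gated Llama-2 base weights:

```python
# Hedged sketch: recreate the 4-bit nf4 setup from the card and attach the adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Casper0508/MSc_llama2_finetuned_model_updatePara")
```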
FSAdrian7/finetuning-sentiment-model-3000-samples
FSAdrian7
2024-06-17T20:34:25Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-17T11:13:21Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3394 - Accuracy: 0.8733 - F1: 0.9159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
numen-tech/Smaug-Llama-3-70B-Instruct-abliterated-v3-w4a16g128asym
numen-tech
2024-06-17T20:33:59Z
0
1
null
[ "arxiv:2308.13137", "license:llama3", "region:us" ]
null
2024-06-17T19:57:49Z
--- license: llama3 --- 4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Smaug-Llama-3-70B-Instruct-abliterated-v3](https://huggingface.co/failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3).
Parth211/startup-finetune
Parth211
2024-06-17T20:30:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-17T20:29:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tales-Alves/Fine_Tuning_Bebidas
Tales-Alves
2024-06-17T20:30:49Z
13
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-05T14:06:11Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Tales-Alves - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nrshoudi/Whisper-medium-new
nrshoudi
2024-06-17T20:17:29Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:openai/whisper-medium", "base_model:adapter:openai/whisper-medium", "license:apache-2.0", "region:us" ]
null
2024-06-17T20:17:25Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: openai/whisper-medium model-index: - name: Whisper-medium-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper-medium-new This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0014 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1198 | 1.0 | 137 | 0.1716 | | 0.0915 | 2.0 | 274 | 0.1142 | | 0.0769 | 3.0 | 411 | 0.0448 | | 0.0444 | 4.0 | 548 | 0.0345 | | 0.0439 | 5.0 | 685 | 0.0209 | | 0.0175 | 6.0 | 822 | 0.0110 | | 0.0136 | 7.0 | 959 | 0.0079 | | 0.0067 | 8.0 | 1096 | 0.0031 | | 0.0024 | 9.0 | 1233 | 0.0017 | | 0.0015 | 10.0 | 1370 | 0.0014 | ### Framework versions - PEFT 0.8.0 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
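A minimal sketch for attaching this adapter to the base Whisper checkpoint, assuming the repo stores only the PEFT weights; the processor comes from the base model:

```python
# Hedged sketch: load openai/whisper-medium and apply the fine-tuned adapter.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "nrshoudi/Whisper-medium-new")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```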
MaziyarPanahi/mergekit-model_stock-qykbest-GGUF
MaziyarPanahi
2024-06-17T20:05:22Z
5
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:gagan3012/Mistral_arabic_dpo", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:Nexusflow/Starling-LM-7B-beta", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-model_stock-qykbest", "base_model:quantized:mergekit-community/mergekit-model_stock-qykbest" ]
text-generation
2024-06-17T19:42:11Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - arxiv:2403.19522 - base_model:gagan3012/Mistral_arabic_dpo - base_model:mistralai/Mistral-7B-Instruct-v0.2 - base_model:Nexusflow/Starling-LM-7B-beta - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-model_stock-qykbest-GGUF base_model: mergekit-community/mergekit-model_stock-qykbest inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-model_stock-qykbest-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-model_stock-qykbest-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-model_stock-qykbest](https://huggingface.co/mergekit-community/mergekit-model_stock-qykbest) ## Description [MaziyarPanahi/mergekit-model_stock-qykbest-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-model_stock-qykbest-GGUF) contains GGUF format model files for [mergekit-community/mergekit-model_stock-qykbest](https://huggingface.co/mergekit-community/mergekit-model_stock-qykbest). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
JapGuy/AloeBlacc
JapGuy
2024-06-17T20:04:01Z
0
0
null
[ "music", "rvc", "aloe", "blacc", "egbert", "nathaniel", "dawkins", "iii", "model", "audio-to-audio", "en", "license:openrail", "region:us" ]
audio-to-audio
2024-06-17T19:55:22Z
--- license: openrail language: - en pipeline_tag: audio-to-audio tags: - music - rvc - aloe - blacc - egbert - nathaniel - dawkins - iii - model --- ![image.jpg](https://media.wired.com/photos/593238315c4fbd732b55146e/master/pass/blacc-ft.jpg) # Aloe Blacc - Egbert Nathaniel Dawkins III [EN] (V1) # 1230 Epochs - RVC V2 - rmvpe - Titan Medium Trained on 17 minutes 48 seconds of isolated acapellas using UVR (Voc FT + Reverb HQ)
legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF
legraphista
2024-06-17T20:01:44Z
2,285
6
gguf
[ "gguf", "quantized", "GGUF", "quantization", "imat", "imatrix", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "license:other", "region:us", "conversational" ]
text-generation
2024-06-17T18:47:11Z
--- base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct inference: false library_name: gguf license: other license_link: LICENSE license_name: deepseek-license pipeline_tag: text-generation quantized_by: legraphista tags: - quantized - GGUF - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF _Llama.cpp imatrix quantization of deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct_ Original Model: [deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3166](https://github.com/ggerganov/llama.cpp/releases/tag/b3166) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf) | Q8_0 | 16.70GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q6_K.gguf) | Q6_K | 14.07GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_K.gguf) | Q4_K | 10.36GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K.gguf) | Q3_K | 8.13GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q2_K.gguf) | Q2_K | 6.43GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [DeepSeek-Coder-V2-Lite-Instruct.BF16.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.BF16.gguf) | BF16 | 31.42GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.FP16.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.FP16.gguf) | F16 | 31.42GB | ✅ Available | ⚪ Static | 📦 No | 
[DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf) | Q8_0 | 16.70GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q6_K.gguf) | Q6_K | 14.07GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q5_K.gguf) | Q5_K | 11.85GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q5_K_S.gguf) | Q5_K_S | 11.14GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_K.gguf) | Q4_K | 10.36GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q4_K_S.gguf) | Q4_K_S | 9.53GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ4_NL.gguf) | IQ4_NL | 8.91GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.57GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K.gguf) | Q3_K | 8.13GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.46GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q3_K_S.gguf) | Q3_K_S | 7.49GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_M.gguf) | IQ3_M | 7.55GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_S.gguf) | IQ3_S | 7.49GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XS.gguf) | IQ3_XS | 7.12GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 6.96GB | ✅ Available | 🟢 IMatrix | 📦 No | 
[DeepSeek-Coder-V2-Lite-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q2_K.gguf) | Q2_K | 6.43GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.Q2_K_S.gguf) | Q2_K_S | 6.46GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_M.gguf) | IQ2_M | 6.33GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_S.gguf) | IQ2_S | 6.01GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XS.gguf) | IQ2_XS | 5.97GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 5.64GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_M.gguf) | IQ1_M | 5.24GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct.IQ1_S.gguf) | IQ1_S | 4.99GB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF --include "DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/DeepSeek-Coder-V2-Lite-Instruct-IMat-GGUF --include "DeepSeek-Coder-V2-Lite-Instruct.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` <|begin▁of▁sentence|>User: {user_prompt} Assistant: {assistant_response}<|end▁of▁sentence|>User: {next_user_prompt} ``` ### Chat template with system prompt ``` <|begin▁of▁sentence|>{system_prompt} User: {user_prompt} Assistant: {assistant_response}<|end▁of▁sentence|>User: {next_user_prompt} ``` ### Llama.cpp ``` llama.cpp/main -m DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. 
Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `DeepSeek-Coder-V2-Lite-Instruct.Q8_0`) 3. Run `gguf-split --merge DeepSeek-Coder-V2-Lite-Instruct.Q8_0/DeepSeek-Coder-V2-Lite-Instruct.Q8_0-00001-of-XXXXX.gguf DeepSeek-Coder-V2-Lite-Instruct.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
blockblockblock/gpt2-medium-bpw6-exl2
blockblockblock
2024-06-17T19:53:56Z
65
0
transformers
[ "transformers", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-06-17T19:49:41Z
--- language: en license: mit --- # GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English text using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"}, {'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"}, {'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners. 
> > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a military'}, {'generated_text': 'The man worked as a salesman in Mexico and eventually'}, {'generated_text': 'The man worked as a supervisor at the department for'}, {'generated_text': 'The man worked as a cleaner for the same corporation'}, {'generated_text': 'The man worked as a barman and was involved'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a social worker in a children'}, {'generated_text': 'The woman worked as a marketing manager, and her'}, {'generated_text': 'The woman worked as a customer service agent in a'}, {'generated_text': 'The woman worked as a cleaner for the same corporation'}, {'generated_text': 'The woman worked as a barista and was involved'}] ``` This bias will also affect all fine-tuned versions of this model.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs in at 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length, and the targets are the same sequence shifted one token (word or piece of word) to the right. Internally, the model uses a masking mechanism to ensure that the prediction for token `i` uses only the inputs from `1` to `i`, never the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
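To make the quoted methodology concrete, here is a minimal sketch (an illustration, not code from the paper) that scores a short text with `gpt2-medium` and normalizes the total negative log-likelihood per token; exponentiating that average gives the perplexities reported below:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')
model.eval()

text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    # Passing labels=input_ids makes the model shift targets one position to
    # the right internally -- the same CLM objective described under Training.
    out = model(**enc, labels=enc['input_ids'])

nll_per_token = out.loss.item()          # average negative log-likelihood (nats/token)
perplexity = torch.exp(out.loss).item()  # exponentiated average NLL
print(f"NLL/token: {nll_per_token:.3f}, perplexity: {perplexity:.2f}")
```

Dividing the total log-probability by bytes or characters instead of tokens would yield byte-level quantities such as the BPB/BPC figures below.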
#### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) | |:-------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:| | Score | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
Arbi-Houssem/speecht5_ar_tn_1.7
Arbi-Houssem
2024-06-17T19:51:35Z
79
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "ara", "dataset:Arbi-Houssem/Tunisian_dataset_STT-TTS15s_filtred1.0_Mixed", "base_model:MBZUAI/speecht5_tts_clartts_ar", "base_model:finetune:MBZUAI/speecht5_tts_clartts_ar", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-06-17T17:37:26Z
--- language: - ara license: mit base_model: MBZUAI/speecht5_tts_clartts_ar tags: - generated_from_trainer datasets: - Arbi-Houssem/Tunisian_dataset_STT-TTS15s_filtred1.0_Mixed model-index: - name: SpeechT5 TTS Tunisien results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Tunisien This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on the Tunisian_dataset_STT-TTS15s_filtred1.0_Mixed dataset. It achieves the following results on the evaluation set: - Loss: 0.4833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5487 | 53.69 | 1000 | 0.4952 | | 0.4979 | 107.38 | 2000 | 0.4845 | | 0.488 | 161.07 | 3000 | 0.4835 | | 0.4856 | 214.77 | 4000 | 0.4833 | ### Framework versions - Transformers 4.35.1 - Pytorch 2.3.1+cu121 - Datasets 2.14.7 - Tokenizers 0.14.1
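As a usage sketch (not part of the auto-generated card above): inference should follow the standard SpeechT5 pattern from `transformers`. The zero speaker embedding below is a placeholder assumption for illustration — substitute a real 512-dim x-vector for natural-sounding output.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Arbi-Houssem/speecht5_ar_tn_1.7")
model = SpeechT5ForTextToSpeech.from_pretrained("Arbi-Houssem/speecht5_ar_tn_1.7")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="مرحبا بيك", return_tensors="pt")

# Placeholder speaker embedding; a real x-vector is needed for good quality.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```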
mdzrg/roberta-base-squad2-pronouns
mdzrg
2024-06-17T19:49:43Z
121
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:deepset/roberta-base-squad2", "base_model:finetune:deepset/roberta-base-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2024-06-17T18:51:10Z
--- license: cc-by-4.0 base_model: deepset/roberta-base-squad2 tags: - generated_from_trainer model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
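A minimal usage sketch (not part of the auto-generated card), assuming the standard extractive-QA interface inherited from `deepset/roberta-base-squad2`:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mdzrg/roberta-base-squad2-pronouns")

result = qa(
    question="Who fixed the bug?",
    context="Maria reviewed the patch after she fixed the bug in the parser.",
)
print(result["answer"], result["score"])
```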
blockblockblock/gpt2-medium-bpw5.5-exl2
blockblockblock
2024-06-17T19:47:55Z
65
0
transformers
[ "transformers", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-17T19:43:41Z
--- language: en license: mit --- # GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model was pretrained on English-language text using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"}, {'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"}, {'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me with any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me with any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners.
> > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a military'}, {'generated_text': 'The man worked as a salesman in Mexico and eventually'}, {'generated_text': 'The man worked as a supervisor at the department for'}, {'generated_text': 'The man worked as a cleaner for the same corporation'}, {'generated_text': 'The man worked as a barman and was involved'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a social worker in a children'}, {'generated_text': 'The woman worked as a marketing manager, and her'}, {'generated_text': 'The woman worked as a customer service agent in a'}, {'generated_text': 'The woman worked as a cleaner for the same corporation'}, {'generated_text': 'The woman worked as a barista and was involved'}] ``` This bias will also affect all fine-tuned versions of this model.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs in at 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length, and the targets are the same sequence shifted one token (word or piece of word) to the right. Internally, the model uses a masking mechanism to ensure that the prediction for token `i` uses only the inputs from `1` to `i`, never the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) | |:-------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:| | Score | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
lielbin/BabyBERTa-french-Masking-finetuned-qamr
lielbin
2024-06-17T19:45:38Z
113
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2024-06-17T13:03:01Z
--- tags: - generated_from_trainer model-index: - name: BabyBERTa-french-Masking-finetuned-qamr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BabyBERTa-french-Masking-finetuned-qamr This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
mradermacher/Stheno-TheSpice-v1-GGUF
mradermacher
2024-06-17T19:45:00Z
32
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:lik07/Stheno-TheSpice-v1", "base_model:quantized:lik07/Stheno-TheSpice-v1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-17T03:22:48Z
--- base_model: lik07/Stheno-TheSpice-v1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/lik07/Stheno-TheSpice-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stheno-TheSpice-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
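As a concrete starting point (a sketch, not an official recipe from this card), the single-file quants above load directly with `llama-cpp-python`; the filename is an assumption based on the recommended Q4_K_M entry in the table:

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has been downloaded locally.
llm = Llama(model_path="Stheno-TheSpice-v1.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a two-line poem about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```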
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
blockblockblock/gpt2-medium-bpw5-exl2
blockblockblock
2024-06-17T19:41:55Z
64
0
transformers
[ "transformers", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "5-bit", "exl2", "region:us" ]
text-generation
2024-06-17T19:37:41Z
--- language: en license: mit --- # GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model was pretrained on English-language text using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"}, {'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"}, {'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me with any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me with any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners.
> > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a military'}, {'generated_text': 'The man worked as a salesman in Mexico and eventually'}, {'generated_text': 'The man worked as a supervisor at the department for'}, {'generated_text': 'The man worked as a cleaner for the same corporation'}, {'generated_text': 'The man worked as a barman and was involved'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a social worker in a children'}, {'generated_text': 'The woman worked as a marketing manager, and her'}, {'generated_text': 'The woman worked as a customer service agent in a'}, {'generated_text': 'The woman worked as a cleaner for the same corporation'}, {'generated_text': 'The woman worked as a barista and was involved'}] ``` This bias will also affect all fine-tuned versions of this model.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs in at 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length, and the targets are the same sequence shifted one token (word or piece of word) to the right. Internally, the model uses a masking mechanism to ensure that the prediction for token `i` uses only the inputs from `1` to `i`, never the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) | |:-------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:| | Score | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
xmanii/Llama3-8b-simorgh-16bit
xmanii
2024-06-17T19:41:12Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T10:48:49Z
--- model-index: - name: xmanii/llama-3-8b-instruct-bnb-4bit-persian --- # Llama3-8b-simorgh-16bit **Model Information** **Developed by:** xmanii **License:** Apache-2.0 **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit **Model Description** This LLaMA model was fine-tuned on a unique Persian dataset of Alpaca chat conversations consisting of approximately 8,000 rows. Training ran on two H100 GPUs and completed in just under 1 hour; we leveraged Unsloth and Hugging Face's TRL library to accelerate training by 2x. **Open-Source Contribution** This model is open-source, and we invite the community to use and build upon our work. The fine-tuned LLaMA model is designed to improve Persian conversation capabilities, and we hope it will contribute to the advancement of natural language processing in the Persian language. **Using the Model** To use this model, you can use the Hugging Face Transformers library. **Note:** The default usage code provided by Hugging Face is not applicable for this model. Instead, follow the example below; the chat pipeline expects messages with `role`/`content` keys: ```python from transformers import pipeline prompt = "سلام! حالت چطوره؟" messages = [{"role": "user", "content": prompt}] ``` Finally, use the pipeline to generate responses: ```python pipe = pipeline("text-generation", model="xmanii/Llama3-8b-simorgh-16bit") pipe(messages) ``` **Full 16-bit Merged Model** For a full 16-bit merged model, please check out xmanii/Llama3-8b-simorgh-16bit. **Future Work** We are working on quantizing the models and bringing them to Ollama.
silent666/Qwen-Qwen1.5-7B-1718653221
silent666
2024-06-17T19:40:22Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-17T19:40:21Z
--- base_model: Qwen/Qwen1.5-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
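Since the usage section above is empty, here is a hedged sketch based only on this card's metadata (a PEFT adapter for `Qwen/Qwen1.5-7B`); it assumes the adapter loads with the standard `PeftModel` API:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B")
model = PeftModel.from_pretrained(base, "silent666/Qwen-Qwen1.5-7B-1718653221")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```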
silent666/Qwen-Qwen1.5-7B-1718653025
silent666
2024-06-17T19:37:07Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-17T19:37:05Z
--- base_model: Qwen/Qwen1.5-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
ILKT/2024-06-17_21-28-47
ILKT
2024-06-17T19:36:30Z
132
0
transformers
[ "transformers", "safetensors", "ILKT", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2024-06-17T19:35:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
blockblockblock/gpt2-medium-bpw4.8-exl2
blockblockblock
2024-06-17T19:35:53Z
64
0
transformers
[ "transformers", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-17T19:31:38Z
--- language: en license: mit --- # GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"}, {'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"}, {'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners. 
> > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a military'}, {'generated_text': 'The man worked as a salesman in Mexico and eventually'}, {'generated_text': 'The man worked as a supervisor at the department for'}, {'generated_text': 'The man worked as a cleaner for the same corporation'}, {'generated_text': 'The man worked as a barman and was involved'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a social worker in a children'}, {'generated_text': 'The woman worked as a marketing manager, and her'}, {'generated_text': 'The woman worked as a customer service agent in a'}, {'generated_text': 'The woman worked as a cleaner for the same corporation'}, {'generated_text': 'The woman worked as a barista and was involved'}] ``` This bias will also affect all fine-tuned versions of this model.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
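The per-unit quantity in the quote above can be approximated with the `transformers` API. The sketch below is illustrative rather than part of the original card: it computes token-level perplexity, not the byte-level numbers reported in the paper.

```python
# Hedged sketch: average negative log-probability per predicted token,
# exponentiated into a perplexity. Token-level, not the paper's byte-level metric.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')

enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors='pt')
with torch.no_grad():
    # Passing labels makes the model shift them internally, matching the
    # causal objective described in the Training Procedure section.
    loss = model(**enc, labels=enc['input_ids']).loss
print(torch.exp(loss))  # perplexity over this sample text
```

Exponentiating the mean negative log-likelihood recovers perplexity, which is the metric reported for most of the datasets below.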
#### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
adolford/distilbert-base-uncased_test_1
adolford
2024-06-17T19:33:59Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-17T16:55:01Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - recall model-index: - name: distilbert-base-uncased_test_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_test_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2297 - Accuracy: 0.9213 - Recall: 0.9479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 266 | 0.2077 | 0.9187 | 0.9462 | | 0.2539 | 2.0 | 532 | 0.2297 | 0.9213 | 0.9479 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
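The listed hyperparameters map one-to-one onto `transformers.TrainingArguments`; the sketch below is a hedged reconstruction (the output directory name is an assumption, and the card does not identify the training dataset):

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters above.
# Adam betas=(0.9, 0.999), epsilon=1e-8, and the linear scheduler are library defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased_test_1",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```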
xmanii/Llama3-8b-simorgh
xmanii
2024-06-17T19:32:40Z
0
0
null
[ "safetensors", "region:us" ]
null
2024-06-17T10:47:37Z
model-index: - name: xmanii/llama-3-8b-instruct-bnb-4bit-persian description: | **Model Information** **Developed by:** xmanii **License:** Apache-2.0 **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit **Model Description** This LLaMA model was fine-tuned on a unique Persian dataset of Alpaca chat conversations, consisting of approximately 8,000 rows. Our training process utilized two H100 GPUs, completing in just under 1 hour. We leveraged the power of Unsloth and Hugging Face's TRL library to accelerate our training process by 2x. ![Unsloth Made with Love](https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png) **Training Resources** * 2x H100 GPUs * Unsloth and Hugging Face's TRL library **Dataset** * Unique Persian dataset of Alpaca chat conversations * Approximately 8,000 rows **Open-Source Contribution** This model is open-source, and we invite the community to use and build upon our work. The fine-tuned LLaMA model is designed to improve Persian conversation capabilities, and we hope it will contribute to the advancement of natural language processing in the Persian language. **Using Adapters with Unsloth** To run the model with adapters, you can use the following code: ```python import torch from unsloth import FastLanguageModel from unsloth.chat_templates import get_chat_template model_save_path = "path to the download folder"  # path of the Hugging Face folder you pulled model, tokenizer = FastLanguageModel.from_pretrained( model_name=model_save_path, max_seq_length=4096, load_in_4bit=True, ) FastLanguageModel.for_inference(model)  # enable native 2x faster inference tokenizer = get_chat_template( tokenizer, chat_template="llama-3",  # use the llama-3 template mapping={"role": "from", "content": "value", "user": "human", "assistant": "gpt"},  # map the message fields ) messages = [{"from": "human", "value": "your prompt"}]  # add your prompt here as the human turn inputs = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True,  # must be added for generation return_tensors="pt", ).to("cuda") outputs = model.generate(input_ids=inputs, max_new_tokens=2048, use_cache=True) response = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(response) ``` **Full 16-bit Merged Model** For a full 16-bit merged model, please check out xmanii/Llama3-8b-simorgh-16bit. **Future Work** We are working on quantizing the models and bringing them to Ollama.
MaziyarPanahi/mergekit-slerp-flctqsu-GGUF
MaziyarPanahi
2024-06-17T19:31:56Z
5
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-flctqsu", "base_model:quantized:mergekit-community/mergekit-slerp-flctqsu" ]
text-generation
2024-06-17T19:07:26Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:NousResearch/Hermes-2-Pro-Mistral-7B - base_model:WizardLM/WizardMath-7B-V1.1 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-flctqsu-GGUF base_model: mergekit-community/mergekit-slerp-flctqsu inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-flctqsu-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-flctqsu-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-flctqsu](https://huggingface.co/mergekit-community/mergekit-slerp-flctqsu) ## Description [MaziyarPanahi/mergekit-slerp-flctqsu-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-flctqsu-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-flctqsu](https://huggingface.co/mergekit-community/mergekit-slerp-flctqsu). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
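To make the list above concrete, here is a minimal sketch using llama-cpp-python; the quant filename is an assumption, so substitute whichever file you downloaded from this repo:

```python
# Hedged sketch: loading one of this repo's GGUF quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="mergekit-slerp-flctqsu.Q4_K_M.gguf",  # assumed local filename
    n_ctx=4096,
)
out = llm("Q: What does a SLERP merge do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```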
doga/convnext-tiny-224-finetuned-eurosat-albumentations
doga
2024-06-17T19:25:59Z
197
0
transformers
[ "transformers", "tensorboard", "safetensors", "convnext", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnext-tiny-224", "base_model:finetune:facebook/convnext-tiny-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-17T19:04:46Z
--- license: apache-2.0 base_model: facebook/convnext-tiny-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-tiny-224-finetuned-eurosat-albumentations results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9840740740740741 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-224-finetuned-eurosat-albumentations This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0610 - Accuracy: 0.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1609 | 1.0 | 190 | 0.1439 | 0.9667 | | 0.0841 | 2.0 | 380 | 0.0750 | 0.9804 | | 0.0817 | 3.0 | 570 | 0.0610 | 0.9841 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
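For quick inference, the standard pipeline API is enough; a minimal sketch (the image path is a placeholder):

```python
# Hedged sketch: classifying an image with the fine-tuned checkpoint above.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="doga/convnext-tiny-224-finetuned-eurosat-albumentations",
)
print(classifier("satellite_tile.jpg"))  # placeholder path; returns label/score pairs
```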
Peaky8linders/football_lora_axolotl
Peaky8linders
2024-06-17T19:25:57Z
0
0
peft
[ "peft", "llama", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "8-bit", "bitsandbytes", "region:us" ]
null
2024-06-14T21:17:59Z
--- license: llama3 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B model-index: - name: outputs/lora-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: true load_in_4bit: false strict: false datasets: - path: Peaky8linders/euro-summary-alpaca type: alpaca dataset_prepared_path: val_set_size: 0.05 output_dir: ./outputs/lora-out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_modules_to_save: - embed_tokens - lm_head wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true s2_attention: warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # outputs/lora-out This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4887 | 0.0202 | 1 | 0.4672 | | 0.0 | 0.2626 | 13 | 0.0001 | | 0.0 | 0.5253 | 26 | 0.0000 | | 0.0 | 0.7879 | 39 | 0.0000 | | 0.0 | 1.0303 | 52 | 0.0000 | | 0.0 | 1.2929 | 65 | 0.0000 | | 0.0 | 1.5556 | 78 | 0.0000 | | 0.0 | 1.8182 | 91 | 0.0000 | | 0.0 | 2.0606 | 104 | 0.0000 | | 0.0 | 2.3232 | 117 | 0.0000 | | 0.0 | 2.5859 | 130 | 0.0000 | | 0.0 | 2.8485 | 143 | 0.0000 | | 0.0 | 3.0909 | 156 | 0.0000 | | 0.0 | 3.3535 | 169 | 0.0000 | | 0.0 | 3.6162 | 182 | 0.0000 | | 0.0 | 3.8788 | 195 | 0.0000 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
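Since this repo holds a LoRA adapter rather than merged weights, inference goes through `peft`; the sketch below is an assumption-laden starting point (8-bit loading mirrors `load_in_8bit: true` in the config above, and access to the gated Llama-3 base model is required):

```python
# Hedged sketch: loading the adapter on top of its Llama-3 base in 8-bit.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Peaky8linders/football_lora_axolotl",
    load_in_8bit=True,  # matches the axolotl config
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
```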
etanios/mamba-130m-hf-finetuned-epitope
etanios
2024-06-17T19:25:17Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "mamba", "text-generation", "generated_from_trainer", "base_model:state-spaces/mamba-130m-hf", "base_model:finetune:state-spaces/mamba-130m-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T19:24:56Z
--- base_model: state-spaces/mamba-130m-hf tags: - generated_from_trainer model-index: - name: mamba-130m-hf-finetuned-epitope results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mamba-130m-hf-finetuned-epitope This model is a fine-tuned version of [state-spaces/mamba-130m-hf](https://huggingface.co/state-spaces/mamba-130m-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.9989 | 463 | 0.6563 | 0.6464 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
etanios/june-17-epitope-mamba
etanios
2024-06-17T19:24:55Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-17T19:19:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PragmaticPete/tinyqwen
PragmaticPete
2024-06-17T19:19:41Z
149
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T19:15:42Z
--- language: - en pipeline_tag: text-generation tags: - pretrained license: apache-2.0 --- # Qwen2-0.5B ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model. Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. ## Requirements The code for Qwen2 is included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Usage We do not advise using base language models for text generation directly. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. ## Performance The evaluation of base models mainly focuses on model performance in natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include: **English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5-shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot) **Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, Java, PHP, TypeScript, C#, Bash, JavaScript) **Math Tasks**: GSM8K (4-shot), MATH (4-shot) **Chinese Tasks**: C-Eval (5-shot), CMMLU (5-shot) **Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot) #### Qwen2-0.5B & Qwen2-1.5B performance | Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B | | :--------: | :---------: | :------------: | :------------: | :------------: | :------------: | :------------: | | #Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B | | MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** | | MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 | | Theorem QA | - | - | - | - | 8.9 | **15.0** | | HumanEval | 47.6 | 22.0 | **50.0** | 20.1 | 22.0 | 31.1 | | MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 | | GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** | | MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** | | BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 | | HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 | | Winogrande | **74.4** | 66.8 | - | 60.3 | 56.8 | 66.2 | | ARC-C | **61.1** | 48.5 | - | 37.9 | 31.5 | 43.9 | | TruthfulQA | 44.5 | 33.1 | - | 39.4 | 39.7 | **45.9** | | C-Eval | 23.4 | 28.0 | 51.1 | 59.7 | 58.2 | **70.6** | | CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** | ## Citation If you find our work helpful, feel free to cite us. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
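Following the Requirements note above, a minimal loading sketch; whether this repo ships the stock Qwen2-0.5B weights unchanged is an assumption, so substitute the repo id as appropriate:

```python
# Hedged sketch: loading the base model for feature extraction or further
# post-training. With transformers < 4.37.0 this fails with KeyError: 'qwen2'.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PragmaticPete/tinyqwen"  # this record's repo; upstream is Qwen/Qwen2-0.5B
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```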
talli96123/meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_15
talli96123
2024-06-17T19:18:42Z
218
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-17T19:16:13Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_15 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.956386292834891 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meat_calssify_fresh_crop_fixed_overlap_epoch100_V_0_15 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1718 - Accuracy: 0.9564 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1001 | 1.0 | 21 | 1.0997 | 0.3614 | | 1.0637 | 2.0 | 42 | 1.0850 | 0.3863 | | 1.0464 | 3.0 | 63 | 1.0524 | 0.4766 | | 0.9569 | 4.0 | 84 | 0.9464 | 0.5763 | | 0.8778 | 5.0 | 105 | 0.8916 | 0.6168 | | 0.8396 | 6.0 | 126 | 0.8181 | 0.6573 | | 0.7752 | 7.0 | 147 | 0.8517 | 0.6168 | | 0.7765 | 8.0 | 168 | 1.0433 | 0.5140 | | 0.7283 | 9.0 | 189 | 0.9781 | 0.5452 | | 0.6714 | 10.0 | 210 | 0.6957 | 0.7165 | | 0.5872 | 11.0 | 231 | 0.6338 | 0.7352 | | 0.4924 | 12.0 | 252 | 0.5824 | 0.7757 | | 0.4441 | 13.0 | 273 | 0.7042 | 0.7040 | | 0.4818 | 14.0 | 294 | 0.4985 | 0.8100 | | 0.4477 | 15.0 | 315 | 0.5176 | 0.8100 | | 0.387 | 16.0 | 336 | 0.5820 | 0.7757 | | 0.378 | 17.0 | 357 | 0.4651 | 0.8287 | | 0.3353 | 18.0 | 378 | 0.5163 | 0.8037 | | 0.3651 | 19.0 | 399 | 0.3980 | 0.8474 | | 0.312 | 20.0 | 420 | 0.4217 | 0.8629 | | 0.2572 | 21.0 | 441 | 0.4610 | 0.8255 | | 0.25 | 22.0 | 462 | 0.4421 | 0.8349 | | 0.2325 | 23.0 | 483 | 0.4322 | 0.8193 | | 0.2384 | 24.0 | 504 | 0.4207 | 0.8380 | | 0.2295 | 25.0 | 525 | 0.4298 | 0.8411 | | 0.4004 | 26.0 | 546 | 0.4976 | 0.8224 | | 0.2136 | 27.0 | 567 | 0.3272 | 0.8723 | | 0.1851 | 28.0 | 588 | 0.3004 | 0.8941 | | 0.1513 | 29.0 | 609 | 0.3198 | 0.8785 | | 0.2132 | 30.0 | 630 | 0.3403 | 0.8879 | | 0.1704 | 31.0 | 651 | 0.4112 | 0.8692 | | 0.1639 | 32.0 | 672 | 0.3038 | 0.8941 | | 0.2028 | 33.0 | 693 | 0.6632 | 0.7601 | | 0.256 | 34.0 | 714 | 0.3475 | 0.8785 | | 0.142 | 35.0 | 735 | 0.2709 | 0.9034 | | 0.1358 | 36.0 | 756 | 0.2745 | 0.9034 | | 0.1543 | 37.0 | 777 | 0.3139 | 0.8816 | | 0.1214 | 38.0 | 798 | 0.2518 | 0.9128 | | 0.1291 | 39.0 | 819 | 0.4121 | 0.8598 | | 0.1423 | 40.0 | 840 | 0.2469 | 0.9128 | | 0.1071 | 41.0 | 861 | 0.2351 | 0.9252 | | 0.1259 | 42.0 | 882 | 0.3639 | 0.8785 | | 0.1114 | 43.0 | 903 | 0.4624 | 0.8567 | | 0.123 | 44.0 | 924 | 0.3147 | 0.8941 | | 0.0914 | 45.0 | 945 | 0.3599 | 0.8879 | | 0.1154 | 46.0 | 966 | 0.2986 | 0.9003 | | 0.1001 | 47.0 | 987 | 0.2688 | 0.9034 | | 0.0959 | 48.0 | 1008 | 0.2358 
| 0.9159 | | 0.0935 | 49.0 | 1029 | 0.2724 | 0.9159 | | 0.104 | 50.0 | 1050 | 0.3857 | 0.8847 | | 0.1158 | 51.0 | 1071 | 0.3359 | 0.8910 | | 0.0766 | 52.0 | 1092 | 0.3030 | 0.8941 | | 0.1048 | 53.0 | 1113 | 0.2648 | 0.9097 | | 0.1065 | 54.0 | 1134 | 0.2859 | 0.9128 | | 0.0738 | 55.0 | 1155 | 0.3660 | 0.8910 | | 0.078 | 56.0 | 1176 | 0.2843 | 0.9221 | | 0.0755 | 57.0 | 1197 | 0.4503 | 0.8816 | | 0.1193 | 58.0 | 1218 | 0.5647 | 0.8006 | | 0.1014 | 59.0 | 1239 | 0.4011 | 0.8660 | | 0.0557 | 60.0 | 1260 | 0.3376 | 0.8941 | | 0.054 | 61.0 | 1281 | 0.2309 | 0.9283 | | 0.0674 | 62.0 | 1302 | 0.3222 | 0.9003 | | 0.0845 | 63.0 | 1323 | 0.2429 | 0.9221 | | 0.0721 | 64.0 | 1344 | 0.2247 | 0.9283 | | 0.0711 | 65.0 | 1365 | 0.3134 | 0.9097 | | 0.0881 | 66.0 | 1386 | 0.2918 | 0.9159 | | 0.0753 | 67.0 | 1407 | 0.2734 | 0.9065 | | 0.059 | 68.0 | 1428 | 0.3353 | 0.8754 | | 0.0814 | 69.0 | 1449 | 0.3093 | 0.9159 | | 0.1317 | 70.0 | 1470 | 0.1641 | 0.9439 | | 0.0539 | 71.0 | 1491 | 0.1988 | 0.9470 | | 0.0572 | 72.0 | 1512 | 0.2493 | 0.9159 | | 0.0322 | 73.0 | 1533 | 0.2045 | 0.9315 | | 0.0473 | 74.0 | 1554 | 0.2380 | 0.9315 | | 0.0478 | 75.0 | 1575 | 0.1687 | 0.9377 | | 0.0554 | 76.0 | 1596 | 0.2121 | 0.9315 | | 0.0444 | 77.0 | 1617 | 0.2172 | 0.9439 | | 0.0808 | 78.0 | 1638 | 0.3581 | 0.8910 | | 0.0522 | 79.0 | 1659 | 0.2155 | 0.9408 | | 0.0402 | 80.0 | 1680 | 0.2204 | 0.9283 | | 0.0387 | 81.0 | 1701 | 0.1438 | 0.9564 | | 0.0294 | 82.0 | 1722 | 0.3094 | 0.9221 | | 0.0449 | 83.0 | 1743 | 0.2850 | 0.9128 | | 0.029 | 84.0 | 1764 | 0.3040 | 0.9128 | | 0.0419 | 85.0 | 1785 | 0.1831 | 0.9439 | | 0.0297 | 86.0 | 1806 | 0.2211 | 0.9221 | | 0.0382 | 87.0 | 1827 | 0.2203 | 0.9346 | | 0.0524 | 88.0 | 1848 | 0.2093 | 0.9377 | | 0.0524 | 89.0 | 1869 | 0.2195 | 0.9252 | | 0.0446 | 90.0 | 1890 | 0.2358 | 0.9377 | | 0.0423 | 91.0 | 1911 | 0.2129 | 0.9283 | | 0.0434 | 92.0 | 1932 | 0.2199 | 0.9315 | | 0.0429 | 93.0 | 1953 | 0.1954 | 0.9470 | | 0.0302 | 94.0 | 1974 | 0.1379 | 0.9564 | | 0.046 | 95.0 | 1995 | 0.1609 | 0.9502 | | 0.0247 | 96.0 | 2016 | 0.1978 | 0.9315 | | 0.0289 | 97.0 | 2037 | 0.1872 | 0.9439 | | 0.0452 | 98.0 | 2058 | 0.2132 | 0.9377 | | 0.0308 | 99.0 | 2079 | 0.1592 | 0.9377 | | 0.0274 | 100.0 | 2100 | 0.1718 | 0.9564 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
blockblockblock/gpt2-medium-bpw4.2-exl2
blockblockblock
2024-06-17T19:17:56Z
63
0
transformers
[ "transformers", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-17T19:13:45Z
--- language: en license: mit --- # GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"}, {'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"}, {'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners. 
> > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a military'}, {'generated_text': 'The man worked as a salesman in Mexico and eventually'}, {'generated_text': 'The man worked as a supervisor at the department for'}, {'generated_text': 'The man worked as a cleaner for the same corporation'}, {'generated_text': 'The man worked as a barman and was involved'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a social worker in a children'}, {'generated_text': 'The woman worked as a marketing manager, and her'}, {'generated_text': 'The woman worked as a customer service agent in a'}, {'generated_text': 'The woman worked as a cleaner for the same corporation'}, {'generated_text': 'The woman worked as a barista and was involved'}] ``` This bias will also affect all fine-tuned versions of this model.
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
mradermacher/Umbral-v0.4-3-GGUF
mradermacher
2024-06-17T19:17:18Z
4
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mergekit-community/Umbral-v0.4-3", "base_model:quantized:mergekit-community/Umbral-v0.4-3", "endpoints_compatible", "region:us" ]
null
2024-06-17T00:25:43Z
--- base_model: mergekit-community/Umbral-v0.4-3 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mergekit-community/Umbral-v0.4-3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-3-GGUF/resolve/main/Umbral-v0.4-3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
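As a concrete starting point for the Usage section above, here is a minimal sketch (an assumption, not from the original card) of loading one of the provided quants with llama-cpp-python, which can fetch GGUF files directly from the Hub:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/Umbral-v0.4-3-GGUF",
    filename="Umbral-v0.4-3.Q4_K_S.gguf",  # "fast, recommended" per the table
    n_ctx=2048,
)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```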
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
silent666/google-gemma-2b-1718651715
silent666
2024-06-17T19:15:17Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "region:us" ]
null
2024-06-17T19:15:15Z
--- base_model: google/gemma-2b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
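Since the card's How to Get Started section is left blank, the following is only a hedged sketch of how such a PEFT adapter is typically loaded on top of its base model; the adapter's task and intended generation settings are unknown:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
# Attach this repository's adapter weights to the base model.
model = PeftModel.from_pretrained(base, "silent666/google-gemma-2b-1718651715")
```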
Masioki/fusion_asrtbsc_distilbert-uncased-best
Masioki
2024-06-17T19:15:14Z
36
0
transformers
[ "transformers", "tensorboard", "safetensors", "fusion-cross-attention-sentence-classifier", "generated_from_trainer", "en", "dataset:asapp/slue-phase-2", "model-index", "endpoints_compatible", "region:us" ]
null
2024-06-11T23:11:26Z
--- tags: - generated_from_trainer model-index: - name: fusion_asrtbsc_distilbert-uncased-best results: - task: type: dialogue act classification dataset: name: asapp/slue-phase-2 type: hvb metrics: - name: F1 macro E2E type: F1 macro value: 72.22 - name: F1 macro GT type: F1 macro value: 72.29 datasets: - asapp/slue-phase-2 language: - en metrics: - f1-macro --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fusion_asrtbsc_distilbert-uncased-best ASR transcripts with prosody encoding and ASR encoding residual cross attention fusion multi-label DAC ## Model description ASR encoder: [Whisper small](https://huggingface.co/openai/whisper-small) encoder Prosody encoder: 2 layer transformer encoder with initial dense projection Backbone: [DistilBert uncased](https://huggingface.co/distilbert/distilbert-base-uncased) Fusion: 2 residual cross attention fusion layers (F_asr x F_text and F_prosody x F_text) with dense layer on top Pooling: Self attention Multi-label classification head: 2 dense layers with two dropouts 0.3 and Tanh activation in between ## Training and evaluation data Trained on ASR transcripts. Evaluated on ground truth (GT) and normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts (E2E). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00043 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
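A minimal PyTorch sketch of the classification head described above (an assumption, not the authors' code; the hidden size, number of dialogue-act labels, and exact dropout placement are guesses):

```python
import torch.nn as nn

def make_dac_head(d_model=768, n_labels=18):  # n_labels is a placeholder
    return nn.Sequential(
        nn.Dropout(0.3),
        nn.Linear(d_model, d_model),
        nn.Tanh(),                     # activation between the two dense layers
        nn.Dropout(0.3),
        nn.Linear(d_model, n_labels),  # one logit per dialogue act (multi-label)
    )
```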
silent666/Qwen-Qwen1.5-1.8B-1718651683
silent666
2024-06-17T19:14:44Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-17T19:14:43Z
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
silent666/Qwen-Qwen1.5-0.5B-1718651642
silent666
2024-06-17T19:14:04Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-17T19:14:02Z
--- base_model: Qwen/Qwen1.5-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Masioki/fusion_gttbsc_distilbert-uncased-ft
Masioki
2024-06-17T19:12:05Z
34
0
transformers
[ "transformers", "tensorboard", "safetensors", "fusion-cross-attention-sentence-classifier", "generated_from_trainer", "en", "dataset:asapp/slue-phase-2", "model-index", "endpoints_compatible", "region:us" ]
null
2024-06-09T23:21:44Z
--- tags: - generated_from_trainer model-index: - name: fusion_gttbsc_distilbert-uncased-ft results: - task: type: dialogue act classification dataset: name: asapp/slue-phase-2 type: hvb metrics: - name: F1 macro E2E type: F1 macro value: TBA - name: F1 macro GT type: F1 macro value: TBA datasets: - asapp/slue-phase-2 language: - en metrics: - f1-macro --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fusion_gttbsc_distilbert-uncased-ft Ground truth text with prosody encoding and ASR encoding residual cross attention fusion multi-label DAC ## Model description ASR encoder: [Whisper small](https://huggingface.co/openai/whisper-small) encoder Prosody encoder: 2 layer transformer encoder with initial dense projection Backbone: [DistilBert uncased](https://huggingface.co/distilbert/distilbert-base-uncased) Fusion: 2 residual cross attention fusion layers (F_asr x F_text and F_prosody x F_text) with dense layer on top Pooling: Self attention Multi-label classification head: 2 dense layers with two dropouts 0.3 and Tanh activation in between ## Training and evaluation data Trained on ground truth. Evaluated on ground truth (GT) and normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts (E2E). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0007 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
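A minimal PyTorch sketch of one residual cross-attention fusion layer as described above (an assumption, not the authors' code; dimensions are placeholders):

```python
import torch.nn as nn

class ResidualCrossAttentionFusion(nn.Module):
    def __init__(self, d_model=768, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.dense = nn.Linear(d_model, d_model)

    def forward(self, f_text, f_other):
        # Queries come from the text stream; keys/values come from the ASR or
        # prosody features (the F_asr x F_text and F_prosody x F_text pairs above).
        fused, _ = self.attn(f_text, f_other, f_other)
        return self.dense(f_text + fused)  # residual connection, dense layer on top
```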
blockblockblock/gpt2-medium-bpw4-exl2
blockblockblock
2024-06-17T19:12:00Z
64
0
transformers
[ "transformers", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-06-17T19:07:44Z
--- language: en license: mit --- # GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model was pretrained on English text using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"}, {'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"}, {'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners. 
> > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a military'}, {'generated_text': 'The man worked as a salesman in Mexico and eventually'}, {'generated_text': 'The man worked as a supervisor at the department for'}, {'generated_text': 'The man worked as a cleaner for the same corporation'}, {'generated_text': 'The man worked as a barman and was involved'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a social worker in a children'}, {'generated_text': 'The woman worked as a marketing manager, and her'}, {'generated_text': 'The woman worked as a customer service agent in a'}, {'generated_text': 'The woman worked as a cleaner for the same corporation'}, {'generated_text': 'The woman worked as a barista and was involved'}] ``` This bias will also affect all fine-tuned versions of this model. 
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the predictions for token `i` only use the inputs from `1` to `i` and not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. 
#### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
Masioki/gttbsc_distilbert-ft
Masioki
2024-06-17T19:10:31Z
35
0
transformers
[ "transformers", "tensorboard", "safetensors", "single-embedding-sentence-classifier", "generated_from_trainer", "en", "dataset:asapp/slue-phase-2", "model-index", "endpoints_compatible", "region:us" ]
null
2024-06-09T19:06:32Z
--- tags: - generated_from_trainer model-index: - name: gttbsc_distilbert-ft results: - task: type: dialogue act classification dataset: name: asapp/slue-phase-2 type: hvb metrics: - name: F1 macro E2E type: F1 macro value: 65.60 - name: F1 macro GT type: F1 macro value: 71.82 datasets: - asapp/slue-phase-2 language: - en metrics: - f1-macro --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gttbsc_distilbert-ft Ground truth text multi-label DAC. Fine-tuned using LoRA. ## Model description Backbone: [DistilBert uncased](https://huggingface.co/distilbert/distilbert-base-uncased) Pooling: Self attention Multi-label classification head: 2 dense layers with two dropouts 0.3 and Tanh activation in between ## Training and evaluation data Trained on ground truth. Evaluated on ground truth (GT) and normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts (E2E). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00043 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
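A hedged sketch of attaching a LoRA adapter to a DistilBERT backbone with peft, since the card says the model was fine-tuned using LoRA (not the authors' code: it uses a stock sequence-classification head rather than the custom head described above, and the rank, target modules, and label count are guesses):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

backbone = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased",
    num_labels=18,  # placeholder for the number of dialogue-act labels
    problem_type="multi_label_classification",
)
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1,
                    target_modules=["q_lin", "v_lin"])  # DistilBERT attention projections
model = get_peft_model(backbone, config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```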
Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B-GGUF
Casual-Autopsy
2024-06-17T19:08:48Z
2
3
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B", "base_model:quantized:Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-17T19:07:54Z
--- library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B --- # Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B-GGUF This model was converted to GGUF format from [`Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B`](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B-GGUF --hf-file l3-umbral-mind-rp-v0.8-8b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B-GGUF --hf-file l3-umbral-mind-rp-v0.8-8b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B-GGUF --hf-file l3-umbral-mind-rp-v0.8-8b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo Casual-Autopsy/L3-Umbral-Mind-RP-v0.8-8B-GGUF --hf-file l3-umbral-mind-rp-v0.8-8b-q5_k_m.gguf -c 2048 ```
blockblockblock/gpt2-medium-bpw3.7-exl2
blockblockblock
2024-06-17T19:05:59Z
64
0
transformers
[ "transformers", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-17T19:01:47Z
--- language: en license: mit --- # GPT-2 Medium ## Model Details **Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model was pretrained on English text using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"}, {'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"}, {'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = GPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') model = TFGPT2Model.from_pretrained('gpt2-medium') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners. 
> > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-medium') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a military'}, {'generated_text': 'The man worked as a salesman in Mexico and eventually'}, {'generated_text': 'The man worked as a supervisor at the department for'}, {'generated_text': 'The man worked as a cleaner for the same corporation'}, {'generated_text': 'The man worked as a barman and was involved'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a social worker in a children'}, {'generated_text': 'The woman worked as a marketing manager, and her'}, {'generated_text': 'The woman worked as a customer service agent in a'}, {'generated_text': 'The woman worked as a cleaner for the same corporation'}, {'generated_text': 'The woman worked as a barista and was involved'}] ``` This bias will also affect all fine-tuned versions of this model. 
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the predictions for token `i` only use the inputs from `1` to `i` and not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. 
#### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
Sanjay19tsh/summ_Llama3_model
Sanjay19tsh
2024-06-17T19:02:36Z
0
1
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-17T19:02:28Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Sanjay19tsh - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
silent666/Qwen-Qwen1.5-7B-1718650938
silent666
2024-06-17T19:02:22Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-17T19:02:18Z
--- base_model: Qwen/Qwen1.5-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Masioki/prosody_gttbsc_distilbert-uncased-energy
Masioki
2024-06-17T19:02:00Z
38
0
transformers
[ "transformers", "tensorboard", "safetensors", "cross-attention-sentence-classifier", "generated_from_trainer", "en", "dataset:asapp/slue-phase-2", "model-index", "endpoints_compatible", "region:us" ]
null
2024-06-06T22:09:14Z
--- tags: - generated_from_trainer model-index: - name: prosody_gttbsc_distilbert-uncased-energy results: - task: type: dialogue act classification dataset: name: asapp/slue-phase-2 type: hvb metrics: - name: F1 macro E2E type: F1 macro value: 65.75 - name: F1 macro GT type: F1 macro value: 71.95 datasets: - asapp/slue-phase-2 language: - en metrics: - f1-macro --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prosody_gttbsc_distilbert-uncased-energy Multi-label dialogue act classification (DAC) on ground-truth text, with a prosody encoding fused via residual cross-attention. ## Model description Prosody encoder: 2-layer transformer encoder with an initial dense projection Backbone: [DistilBert uncased](https://huggingface.co/distilbert/distilbert-base-uncased) Pooling: Self-attention Multi-label classification head: 2 dense layers with dropout (0.3) before each and a Tanh activation in between ## Training and evaluation data Trained on ground-truth transcripts. Evaluated on ground-truth text (GT) and on normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts (E2E). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
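The architecture sketched in the card's description maps onto a small PyTorch module. The following is an illustrative sketch only, not the repository's actual `cross-attention-sentence-classifier` implementation: the head count, prosody feature dimension, and HVB label count are assumptions, and the fusion details (text attending to prosody, residual added to the text stream) are one plausible reading of "residual cross attention".

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel

class SelfAttentionPooling(nn.Module):
    """Collapse a token sequence into one vector with learned attention weights."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states, attention_mask):
        scores = self.scorer(hidden_states).squeeze(-1)           # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)     # (batch, seq, 1)
        return (weights * hidden_states).sum(dim=1)               # (batch, hidden)

class ProsodyCrossAttentionDAC(nn.Module):
    """Text backbone + 2-layer prosody encoder fused by residual cross-attention,
    then self-attention pooling and a 2-layer head (dropout 0.3, Tanh)."""
    def __init__(self, backbone, prosody_dim: int, num_labels: int,
                 hidden_size: int = 768, num_heads: int = 8):
        super().__init__()
        self.backbone = backbone
        self.prosody_proj = nn.Linear(prosody_dim, hidden_size)   # initial dense projection
        layer = nn.TransformerEncoderLayer(hidden_size, num_heads, batch_first=True)
        self.prosody_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.pool = SelfAttentionPooling(hidden_size)
        self.head = nn.Sequential(                                # 2 dense layers, dropouts 0.3, Tanh
            nn.Dropout(0.3), nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Dropout(0.3), nn.Linear(hidden_size, num_labels),
        )

    def forward(self, input_ids, attention_mask, prosody_feats):
        text = self.backbone(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
        prosody = self.prosody_encoder(self.prosody_proj(prosody_feats))
        attn_out, _ = self.cross_attn(text, prosody, prosody)     # text attends to prosody
        fused = text + attn_out                                   # residual connection
        return self.head(self.pool(fused, attention_mask))        # multi-label logits

# Usage sketch (prosody_dim=1 for frame-level energy and num_labels=18 are assumptions);
# train with nn.BCEWithLogitsLoss for the multi-label DAC objective.
backbone = DistilBertModel.from_pretrained("distilbert-base-uncased")
model = ProsodyCrossAttentionDAC(backbone, prosody_dim=1, num_labels=18)
```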
silent666/Qwen-Qwen1.5-7B-1718650830
silent666
2024-06-17T19:00:32Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-17T19:00:31Z
--- base_model: Qwen/Qwen1.5-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
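This card's getting-started section is also a placeholder, but here the adapter repository id is known (silent666/Qwen-Qwen1.5-7B-1718650830). A minimal sketch of attaching it to its Qwen/Qwen1.5-7B base and optionally folding it into the base weights for standalone serving; this assumes a mergeable adapter type such as LoRA, which is what the repo tags suggest (`merge_and_unload` does not apply to all PEFT methods):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "silent666/Qwen-Qwen1.5-7B-1718650830")

# Optionally fold the adapter into the base weights (LoRA-style adapters only)
merged = model.merge_and_unload()
merged.save_pretrained("qwen1.5-7b-1718650830-merged")  # hypothetical output path
```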