Dataset schema (one row per model repository):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-05 06:27:37 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 539 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-05 06:27:15 |
| card | string | length 11 – 1.01M |
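The rows that follow list these columns in order (modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card). As a rough illustration of how such records could be filtered, here is a minimal sketch that assumes the rows are published as a Hugging Face `datasets` dataset; the repository id is a placeholder, not something stated in this document.

```python
# Minimal sketch, assuming the rows are available via the `datasets` library.
# "<dataset-repo-id>" is a placeholder and must be replaced with the real dataset id.
from datasets import load_dataset

rows = load_dataset("<dataset-repo-id>", split="train")

# Keep only models tagged for automatic speech recognition.
asr_rows = rows.filter(lambda row: row["pipeline_tag"] == "automatic-speech-recognition")

# Print the five most-downloaded entries (downloads is an int64 column).
for row in sorted(asr_rows, key=lambda r: r["downloads"], reverse=True)[:5]:
    print(row["modelId"], row["downloads"], row["likes"])
```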
modelId: ckandemir/whisper-tiny
author: ckandemir
last_modified: 2024-02-16T10:08:28Z
downloads: 62
likes: 0
library_name: transformers
tags:
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: automatic-speech-recognition
createdAt: 2023-07-31T00:46:15Z
card:
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: Whisper Tiny-Handy-Pretty - ckandemir results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train[450:] args: en-US metrics: - name: Wer type: wer value: 0.3116883116883117 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny-Handy-Pretty - ckandemir This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.5197 - Wer Ortho: 31.6471 - Wer: 0.3117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.8427 | 1.32 | 50 | 0.5401 | 35.9655 | 0.3566 | | 0.1982 | 2.63 | 100 | 0.5179 | 35.5336 | 0.3501 | | 0.0531 | 3.95 | 150 | 0.5197 | 31.6471 | 0.3117 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
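The card above documents the fine-tuning run for ckandemir/whisper-tiny but includes no usage snippet. A minimal inference sketch with the standard `transformers` ASR pipeline could look like the following; the audio filename is a placeholder.

```python
# Hedged usage sketch for ckandemir/whisper-tiny; "sample.wav" is a placeholder file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ckandemir/whisper-tiny")

# Transcribe a local audio file and print the text.
result = asr("sample.wav")
print(result["text"])
```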
modelId: Hamzaharman/imageclassification
author: Hamzaharman
last_modified: 2024-02-16T10:07:37Z
downloads: 35
likes: 0
library_name: transformers
tags:
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: image-classification
createdAt: 2024-02-16T00:34:25Z
card:
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: imageclassification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.59375 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # imageclassification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1467 - Accuracy: 0.5938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 1.8113 | 0.35 | | No log | 2.0 | 80 | 1.5533 | 0.3937 | | No log | 3.0 | 120 | 1.4193 | 0.4688 | | No log | 4.0 | 160 | 1.3237 | 0.5687 | | No log | 5.0 | 200 | 1.2989 | 0.4938 | | No log | 6.0 | 240 | 1.2901 | 0.5 | | No log | 7.0 | 280 | 1.2380 | 0.5625 | | No log | 8.0 | 320 | 1.1773 | 0.6125 | | No log | 9.0 | 360 | 1.2149 | 0.5625 | | No log | 10.0 | 400 | 1.2280 | 0.5312 | | No log | 11.0 | 440 | 1.2326 | 0.5625 | | No log | 12.0 | 480 | 1.1488 | 0.5875 | | 1.0601 | 13.0 | 520 | 1.1597 | 0.6062 | | 1.0601 | 14.0 | 560 | 1.1953 | 0.5563 | | 1.0601 | 15.0 | 600 | 1.2011 | 0.55 | | 1.0601 | 16.0 | 640 | 1.2294 | 0.55 | | 1.0601 | 17.0 | 680 | 1.1972 | 0.5687 | | 1.0601 | 18.0 | 720 | 1.3043 | 0.525 | | 1.0601 | 19.0 | 760 | 1.2796 | 0.525 | | 1.0601 | 20.0 | 800 | 1.1781 | 0.5813 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
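The card above reports accuracy for the fine-tuned ViT classifier but gives no usage code. A minimal sketch with the `transformers` image-classification pipeline could look like this; the image path is a placeholder.

```python
# Hedged usage sketch for Hamzaharman/imageclassification; "example.jpg" is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="Hamzaharman/imageclassification")

# Returns the top predicted labels with scores for the input image.
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```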
modelId: hs4jk24erfc/test_fine_tuned_model
author: hs4jk24erfc
last_modified: 2024-02-16T10:04:29Z
downloads: 0
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-02-16T10:03:14Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: maramzarkaoui/facebook1
author: maramzarkaoui
last_modified: 2024-02-16T10:04:18Z
downloads: 0
likes: 0
library_name: null
tags:
[ "safetensors", "autotrain", "text-generation", "license:other", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-02-16T10:04:16Z
card:
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
modelId: ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_en.layer1_4_torch.bfloat16_16_32_0.01_8_0.0002
author: ferrazzipietro
last_modified: 2024-02-16T10:03:41Z
downloads: 0
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-02-16T10:03:28Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: warmestman/whisper-large-v3-mn-1
author: warmestman
last_modified: 2024-02-16T10:02:54Z
downloads: 7
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "mn", "dataset:mozilla-foundation/common_voice_16_1", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
pipeline_tag: automatic-speech-recognition
createdAt: 2024-02-15T11:03:21Z
card:
--- language: - mn license: apache-2.0 base_model: openai/whisper-large-v3 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_16_1 metrics: - wer model-index: - name: 'Whisper Small MN - Ankhbayasgalan Davaadorj ' results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 16.1 type: mozilla-foundation/common_voice_16_1 config: mn split: test args: 'config: mn, split: test+validation' metrics: - name: Wer type: wer value: 67.84162771514984 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small MN - Ankhbayasgalan Davaadorj This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: - Loss: 0.5096 - Wer: 67.8416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0832 | 3.94 | 1000 | 0.3988 | 73.6211 | | 0.0051 | 7.87 | 2000 | 0.4563 | 66.0654 | | 0.0004 | 11.81 | 3000 | 0.5096 | 67.8416 | ### Framework versions - Transformers 4.37.2 - Pytorch 1.12.1+cu116 - Datasets 2.17.0 - Tokenizers 0.15.2
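The card above reports WER as a percentage (67.84 on Common Voice 16.1). For reference, the underlying metric can be computed with the `evaluate` library, which returns WER as a fraction; the transcripts below are placeholders.

```python
# Hedged sketch of a WER computation like the one reported in the card; the strings are placeholders.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["placeholder hypothesis transcript"]
references = ["placeholder reference transcript"]

# evaluate returns WER as a fraction; multiply by 100 for the percentage form used in the card.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```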
modelId: anjith672/dillu_high_train
author: anjith672
last_modified: 2024-02-16T10:01:58Z
downloads: 1
likes: 1
library_name: diffusers
tags:
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2024-02-16T07:31:37Z
card:
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: dil tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was trained.
modelId: sadhaklal/bert-base-uncased-finetuned-mrpc-v2
author: sadhaklal
last_modified: 2024-02-16T09:59:17Z
downloads: 92
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "bert", "text-classification", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2024-02-14T10:14:44Z
card:
--- license: apache-2.0 datasets: - glue language: - en metrics: - accuracy - f1 library_name: transformers pipeline_tag: text-classification widget: - text: The company didn 't detail the costs of the replacement and repairs . [SEP] But company officials expect the costs of the replacement work to run into the millions of dollars . example_title: not_equivalent - text: According to the federal Centers for Disease Control and Prevention ( news - web sites ) , there were 19 reported cases of measles in the United States in 2002 . [SEP] The Centers for Disease Control and Prevention said there were 19 reported cases of measles in the United States in 2002 . example_title: equivalent --- # bert-base-uncased-finetuned-mrpc-v2 BERT (`"bert-base-uncased"`) finetuned on MRPC (Microsoft Research Paraphrase Corpus). The model predicts whether two sentences are semantically equivalent. It pertains to section 4 of chapter 3 of the Hugging Face "NLP Course" (https://huggingface.co/learn/nlp-course/chapter3/4). It was trained using a custom PyTorch loop with Hugging Face Accelerate. Code: https://github.com/sambitmukherjee/huggingface-notebooks/blob/main/course/en/chapter3/section4.ipynb Experiment tracking: https://wandb.ai/sadhaklal/bert-base-uncased-finetuned-mrpc-v2 ## Usage ``` from transformers import pipeline classifier = pipeline("text-classification", model="sadhaklal/bert-base-uncased-finetuned-mrpc-v2") sentence1 = "A tropical storm rapidly developed in the Gulf of Mexico Sunday and was expected to hit somewhere along the Texas or Louisiana coasts by Monday night ." sentence2 = "A tropical storm rapidly developed in the Gulf of Mexico on Sunday and could have hurricane-force winds when it hits land somewhere along the Louisiana coast Monday night ." sentence_pair = sentence1 + " [SEP] " + sentence2 print(classifier(sentence_pair)) sentence1 = "The settling companies would also assign their possible claims against the underwriters to the investor plaintiffs , he added ." sentence2 = "Under the agreement , the settling companies will also assign their potential claims against the underwriters to the investors , he added ." sentence_pair = sentence1 + " [SEP] " + sentence2 print(classifier(sentence_pair)) ``` ## Dataset From the dataset page: > The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent. Examples: https://huggingface.co/datasets/glue/viewer/mrpc ## Metrics Accuracy on the 'validation' split of MRPC: 0.875 F1 on the 'validation' split of MRPC: 0.9128
modelId: logicker/SkkuDS-DPO-72B-v1
author: logicker
last_modified: 2024-02-16T09:51:54Z
downloads: 52
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained, dpo", "conversational", "en", "arxiv:2309.16609", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-02-15T08:14:26Z
card:
--- license: other license_name: tongyi-qianwen license_link: >- https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - pretrained, dpo --- # Qwen1.5-72B ## DPO Tuning - Dataset: Intel/orca_dpo_pairs ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA and the mixture of SWA and full attention. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2'. ``` ## Citation ``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
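The card above notes the transformers>=4.37.0 requirement but gives no generation snippet. A minimal chat-style sketch follows; it assumes the tokenizer ships a chat template (the repo is tagged conversational) and that enough GPU memory or offloading is available for a 72B model.

```python
# Hedged generation sketch for logicker/SkkuDS-DPO-72B-v1 (Qwen1.5-72B, DPO-tuned).
# Assumes transformers>=4.37.0, a chat template in the tokenizer, and hardware able to host 72B.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "logicker/SkkuDS-DPO-72B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize DPO fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```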
modelId: shanhy/xlm-roberta-base_lr5e-06_seed42_basic_original_esp-kin-eng_train
author: shanhy
last_modified: 2024-02-16T09:51:53Z
downloads: 100
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2024-02-16T09:50:50Z
card:
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base_lr5e-06_seed42_basic_original_esp-kin-eng_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base_lr5e-06_seed42_basic_original_esp-kin-eng_train This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0293 - Spearman Corr: 0.7251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.63 | 200 | 0.0280 | 0.6016 | | 0.062 | 3.27 | 400 | 0.0275 | 0.6391 | | 0.0304 | 4.9 | 600 | 0.0293 | 0.6641 | | 0.0245 | 6.53 | 800 | 0.0266 | 0.6888 | | 0.0217 | 8.16 | 1000 | 0.0304 | 0.6900 | | 0.0217 | 9.8 | 1200 | 0.0286 | 0.7016 | | 0.0198 | 11.43 | 1400 | 0.0304 | 0.7080 | | 0.0181 | 13.06 | 1600 | 0.0277 | 0.7103 | | 0.0164 | 14.69 | 1800 | 0.0285 | 0.7086 | | 0.0154 | 16.33 | 2000 | 0.0286 | 0.7233 | | 0.0154 | 17.96 | 2200 | 0.0259 | 0.7209 | | 0.0144 | 19.59 | 2400 | 0.0282 | 0.7160 | | 0.0137 | 21.22 | 2600 | 0.0300 | 0.7168 | | 0.0129 | 22.86 | 2800 | 0.0300 | 0.7215 | | 0.0123 | 24.49 | 3000 | 0.0288 | 0.7262 | | 0.0124 | 26.12 | 3200 | 0.0285 | 0.7256 | | 0.0124 | 27.76 | 3400 | 0.0291 | 0.7220 | | 0.0119 | 29.39 | 3600 | 0.0293 | 0.7251 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
modelId: ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_en.layer1_4_torch.bfloat16_16_32_0.01_4_0.0002
author: ferrazzipietro
last_modified: 2024-02-16T09:51:40Z
downloads: 0
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-02-16T09:51:28Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: hewonty/longformer-ner-finetuned-pii
author: hewonty
last_modified: 2024-02-16T09:51:32Z
downloads: 91
likes: 1
library_name: transformers
tags:
[ "transformers", "tensorboard", "safetensors", "longformer", "token-classification", "generated_from_trainer", "base_model:allenai/longformer-base-4096", "base_model:finetune:allenai/longformer-base-4096", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: token-classification
createdAt: 2024-02-15T15:02:46Z
card:
--- license: apache-2.0 base_model: allenai/longformer-base-4096 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: longformer-ner-finetuned-pii results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longformer-ner-finetuned-pii This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0034 - Precision: 0.9772 - Recall: 0.9879 - F1: 0.9825 - Accuracy: 0.9993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0063 | 1.0 | 1324 | 0.0050 | 0.9613 | 0.9842 | 0.9726 | 0.9990 | | 0.0037 | 2.0 | 2648 | 0.0038 | 0.9735 | 0.9873 | 0.9803 | 0.9992 | | 0.002 | 3.0 | 3972 | 0.0034 | 0.9772 | 0.9879 | 0.9825 | 0.9993 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
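The card above reports precision, recall, and F1 for the PII token-classification model but has no usage code. A minimal sketch with the `transformers` token-classification pipeline could look like this; the example text is invented.

```python
# Hedged usage sketch for hewonty/longformer-ner-finetuned-pii; the example text is made up.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hewonty/longformer-ner-finetuned-pii",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)

text = "Contact Jane Doe at jane.doe@example.com or +1-555-0100."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```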
modelId: qkrwnstj/lora_mid_journal_1
author: qkrwnstj
last_modified: 2024-02-16T09:50:39Z
downloads: 3
likes: 1
library_name: diffusers
tags:
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2024-02-16T08:43:31Z
card:
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - qkrwnstj/lora_mid_journal_1 These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the qkrwnstj/mid-captioning-dataset dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
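The card above states that the repository holds LoRA adaptation weights for runwayml/stable-diffusion-v1-5. A minimal loading sketch with diffusers could look like the following; the prompt is a placeholder, since the card shows example images but no prompts.

```python
# Hedged sketch: load the SD 1.5 base pipeline and attach the LoRA weights from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("qkrwnstj/lora_mid_journal_1")

# The prompt is a placeholder; adjust it to the style the LoRA was trained for.
image = pipe("a surreal landscape, midjourney style", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```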
modelId: mindlywork/BandW2Char
author: mindlywork
last_modified: 2024-02-16T09:46:24Z
downloads: 34
likes: 1
library_name: diffusers
tags:
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:cc", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2024-02-16T09:43:27Z
card:
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: BandW2Char output: url: images/l1DJDaP05ESt6MK9MtxN5bJj (1).png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: BandW2Char license: cc --- # BandW2Char <Gallery /> ## Model description BandW2Char ## Trigger words You should use `BandW2Char` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/dasdsff/BandW2Char/tree/main) them in the Files & versions tab.
modelId: ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_en.layer1_4_torch.bfloat16_16_32_0.01_2_0.002
author: ferrazzipietro
last_modified: 2024-02-16T09:45:32Z
downloads: 0
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-02-16T09:45:19Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: eren23/sdxl-zavy-animemix-copax-jibmix-segmoe
author: eren23
last_modified: 2024-02-16T09:37:42Z
downloads: 0
likes: 0
library_name: diffusers
tags:
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "segmoe", "merge", "moe", "sdxl", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2024-02-16T09:13:57Z
card:
--- library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - segmoe - merge - moe - sdxl --- This model is a segmoe merge of 4 models from civitAI: https://civitai.com/models/269232/aam-xl-anime-mix?modelVersionId=303526 https://civitai.com/models/194768/jib-mix-realistic-xl?modelVersionId=335740 https://civitai.com/models/118111/copax-timelessxl-sdxl10?modelVersionId=344540 https://civitai.com/models/119229/zavychromaxl?modelVersionId=320428 Merged using the great project at: https://github.com/segmind/segmoe To do something similar you can either follow the guide in readme or you can follow this blogpost: https://huggingface.co/blog/segmoe The setting I used: ``` base_model: https://civitai.com/api/download/models/320428 num_experts: 4 moe_layers: all num_experts_per_tok: 2 type: sdxl experts: - source_model: https://civitai.com/api/download/models/320428 positive_prompt: "RAW photo, photorealistic, film grain, candid camera, color graded cinematic, eye catchlights, atmospheric lighting, macro shot, skin pores, imperfections, natural." negative_prompt: " (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck" - source_model: https://civitai.com/api/download/models/303526?type=Model&format=SafeTensor&size=full&fp=fp16 positive_prompt: "1girl, mecha suit, samurai face mask, menpo, upper body, underboob, portrait, white orange armor, blonde shimmering hair, 8K, RAW, best quality, masterpiece, ultra high res, colorful, (medium wide shot), (dynamic perspective), sharp focus , (depth of field, bokeh:1.3), extremely detailed eyes and face, beautiful detailed eyes,large breasts,black gold, trimmed gear,In a futuristic weapons factory, ((masterpiece, best quality)), niji, from side, upper body, hips, anime style" negative_prompt: "(low quality, worst quality:1.4), negativeXL_D, cgi, text, signature, watermark, extra limbs" - source_model: https://civitai.com/api/download/models/335740?type=Model&format=SafeTensor&size=full&fp=fp32 positive_prompt: "cinematic photo Documentery Photo a brown bear in the Alaskan wilderness, discovery magazine, cinematic photorealistic, 8k uhd natural lighting, raw, rich, intricate details, key visual, atmospheric lighting, 35mm photograph, film, bokeh, professional, 4k, highly detailed . 
35mm photograph, film, bokeh, professional, 4k, highly detailed" negative_prompt: "drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly, ((bokeh)) Deviantart, Deviantart, jpeg , (worst quality, low quality, normal quality, lowres, low details, oversaturated, undersaturated, overexposed, underexposed, grayscale, bw, bad photo, bad photography, bad art:1.4), (watermark, signature, text font, username, error, logo, words, letters, digits, autograph, trademark, name:1.2), (blur, blurry, grainy), morbid, ugly, asymmetrical, mutated malformed, mutilated, poorly lit, bad shadow, draft, cropped, out of frame, cut off, censored, jpeg artifacts, out of focus, glitch, duplicate, (airbrushed, cartoon, anime, semi-realistic, cgi, render, blender, digital art, manga, amateur:1.3), (3D ,3D Game, 3D Game Scene, 3D Character:1.1), (bad hands, bad anatomy, bad body, bad face, bad teeth, bad arms, bad legs, deformities:1.3)" - source_model: https://civitai.com/api/download/models/344540 positive_prompt: "a photo portrait of an aztec demon shaman zombie in the style of apocalypto" negative_prompt: "(worst quality, low quality, normal quality, lowres, low details, grayscale, bw), painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label, long neck" ``` # Usage !pip install -U segmoe diffusers transformers from segmoe import SegMoEPipeline pipeline = SegMoEPipeline("eren23/sdxl-zavy-animemix-copax-jibmix-segmoe", device="cuda") prompt = "fantastic land canvas, knight cat standing next to a purple medieval village wall" negative_prompt = "nsfw, bad quality, worse quality" img = pipeline( prompt=prompt, negative_prompt=negative_prompt, height=512, width=512, num_inference_steps=30, guidance_scale=7.5, ).images[0] img.save("image.png") # Example Images ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6236052a12982296495f4674/uaPwC6ZLPSJVsKToni9qc.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6236052a12982296495f4674/aaLvgR2QB0o9EnPAYWO6B.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6236052a12982296495f4674/EOxG3c4C0NNWDZxXe3bGG.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6236052a12982296495f4674/mXlVAJYEEcuG6V_TXBN8Q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6236052a12982296495f4674/um6kxh83Nxo7NcGvDoKme.png)
modelId: leenag/check-malayalam
author: leenag
last_modified: 2024-02-16T09:35:45Z
downloads: 60
likes: 0
library_name: transformers
tags:
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: automatic-speech-recognition
createdAt: 2024-02-16T08:15:42Z
card:
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: check-malayalam results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # check-malayalam This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1187 - Wer: 54.0682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7411 | 0.46 | 100 | 0.5833 | 100.1244 | | 0.2408 | 0.93 | 200 | 0.2089 | 72.5056 | | 0.1299 | 1.39 | 300 | 0.1441 | 61.8811 | | 0.1046 | 1.85 | 400 | 0.1250 | 55.8845 | | 0.0763 | 2.31 | 500 | 0.1187 | 54.0682 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.1+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
modelId: thrunlab/Mistral_Sparse_refined_web_50p_2024-02-16
author: thrunlab
last_modified: 2024-02-16T09:28:06Z
downloads: 6
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "sparse_mistral", "text-generation", "generated_from_trainer", "custom_code", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2024-02-16T08:22:09Z
card:
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: Mistral_Sparse_refined_web_50p_2024-02-16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral_Sparse_refined_web_50p_2024-02-16 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 0 - distributed_type: multi-GPU - num_devices: 3 - gradient_accumulation_steps: 3 - total_train_batch_size: 9 - total_eval_batch_size: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5975 | 0.01 | 25 | 2.6362 | | 2.3082 | 0.01 | 50 | 2.5659 | | 2.4024 | 0.02 | 75 | 2.5151 | | 2.3358 | 0.02 | 100 | 2.4817 | | 2.2267 | 0.03 | 125 | 2.4660 | | 2.271 | 0.04 | 150 | 2.4456 | | 2.1709 | 0.04 | 175 | 2.4413 | | 2.2549 | 0.05 | 200 | 2.4306 | | 2.2536 | 0.05 | 225 | 2.4243 | | 2.2234 | 0.06 | 250 | 2.4212 | | 2.2516 | 0.07 | 275 | 2.4202 | | 2.2827 | 0.07 | 300 | 2.4146 | | 2.1774 | 0.08 | 325 | 2.4156 | | 2.278 | 0.08 | 350 | 2.4094 | | 2.204 | 0.09 | 375 | 2.4088 | | 2.1987 | 0.1 | 400 | 2.4073 | | 2.1985 | 0.1 | 425 | 2.4041 | | 2.2198 | 0.11 | 450 | 2.4069 | | 2.2555 | 0.11 | 475 | 2.4014 | | 2.1567 | 0.12 | 500 | 2.4017 | | 2.2918 | 0.13 | 525 | 2.3998 | | 2.2559 | 0.13 | 550 | 2.3959 | | 2.2234 | 0.14 | 575 | 2.3978 | | 2.2001 | 0.14 | 600 | 2.3944 | | 2.1409 | 0.15 | 625 | 2.3957 | | 2.2034 | 0.16 | 650 | 2.3981 | | 2.1863 | 0.16 | 675 | 2.3941 | | 2.2372 | 0.17 | 700 | 2.3936 | | 2.2438 | 0.17 | 725 | 2.3953 | | 2.2172 | 0.18 | 750 | 2.3943 | | 2.1917 | 0.19 | 775 | 2.3921 | | 2.1137 | 0.19 | 800 | 2.3912 | | 2.0766 | 0.07 | 825 | 2.3935 | | 2.1926 | 0.08 | 850 | 2.3913 | | 2.2948 | 0.08 | 875 | 2.3915 | | 2.1349 | 0.08 | 900 | 2.3917 | | 2.2446 | 0.08 | 925 | 2.3876 | | 2.253 | 0.09 | 950 | 2.3880 | | 2.0729 | 0.09 | 975 | 2.3890 | | 2.1965 | 0.09 | 1000 | 2.3873 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
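The repository above is tagged custom_code and uses a sparse_mistral architecture, so loading it presumably requires trusting the remote modeling code. A hedged loading sketch follows; review the remote code before running it.

```python
# Hedged loading sketch for thrunlab/Mistral_Sparse_refined_web_50p_2024-02-16.
# The "custom_code" tag suggests trust_remote_code=True is needed to pull in the
# sparse_mistral model class; this is an assumption, not something stated in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thrunlab/Mistral_Sparse_refined_web_50p_2024-02-16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Sparse activations can reduce inference cost because", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```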
modelId: ferrazzipietro/Mistral-7B-Instruct-v0.2_adapters_en.layer1_4_torch.bfloat16_16_32_0.05_8_0.0002
author: ferrazzipietro
last_modified: 2024-02-16T09:27:15Z
downloads: 0
likes: 0
library_name: transformers
tags:
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2024-02-16T09:27:03Z
card:
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Azma-AI/azma-open-hermes-2.5-mistral-7B-conversation-agent-v1
Azma-AI
2024-02-16T09:23:44Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:Azma-AI/azma-final-conversation-dataset", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T08:54:19Z
--- library_name: transformers datasets: - Azma-AI/azma-final-conversation-dataset --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
seatond/EXTRACTION_rank64_lr2.2e-05_target7_epochs50step_laplha128_batch1_gradacc4
seatond
2024-02-16T09:21:20Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "region:us" ]
null
2024-02-16T09:02:17Z
--- library_name: peft base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1.dev0
Menouar/saqr-7b-merged
Menouar
2024-02-16T09:20:31Z
233
1
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "saqr-7b-instrcut", "Pytorch", "conversational", "custom_code", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:openbmb/UltraFeedback", "dataset:gsm8k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T09:10:53Z
--- library_name: transformers tags: - saqr-7b-instrcut - Pytorch license: apache-2.0 datasets: - HuggingFaceH4/ultrachat_200k - openbmb/UltraFeedback - gsm8k language: - en pipeline_tag: text-generation --- # saqr-7b-merged This model is [**saqr-7b-instruct**](https://huggingface.co/Menouar/saqr-7b-instruct) with its LoRA adapters merged into the base weights, so it can be loaded as a standalone checkpoint; a minimal usage sketch follows below. <img src="https://huggingface.co/Menouar/saqr-7b-instruct/resolve/main/saqr.jpg" alt="Saqr Logo" width="800" style="margin-left:auto; margin-right:auto; display:block;"/>
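A minimal, hypothetical usage sketch for the merged checkpoint described above. It assumes the standard 🤗 Transformers causal-LM API; the prompt format, generation settings, and the need for `trust_remote_code=True` (suggested by the `custom_code` tag in this record) are assumptions, not documented by the author.

```python
# Hypothetical usage sketch for Menouar/saqr-7b-merged (not from the original card).
# The prompt format and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Menouar/saqr-7b-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # requires `accelerate`
    trust_remote_code=True,  # assumed, since the record is tagged custom_code
)

prompt = "Explain the difference between a LoRA adapter and a merged checkpoint."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```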
Reshphil/lab1_finetuning
Reshphil
2024-02-16T09:16:48Z
118
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-15T09:05:21Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.88398487672078 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8556 - Bleu: 52.8840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
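For reference, a minimal inference sketch in the style of the 🤗 NLP course workflow this card follows; loading the checkpoint from `Reshphil/lab1_finetuning` is an assumption based on this record's model id, and the example sentence is arbitrary.

```python
# Minimal translation sketch (repo id assumed from this record; input is arbitrary).
from transformers import pipeline

translator = pipeline("translation", model="Reshphil/lab1_finetuning")
print(translator("Default to expanded threads"))
# e.g. [{'translation_text': '...'}]
```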
SeyedAli/Image-Arousal
SeyedAli
2024-02-16T09:14:59Z
179
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-12T09:10:19Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: vit-Arousal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Image-Arousal This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the custom dataset. It achieves the following results on the evaluation set: - Loss: 0.8522 - Accuracy: 0.6294 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9023 | 0.78 | 100 | 0.8522 | 0.6294 | | 0.5376 | 1.56 | 200 | 0.8592 | 0.6686 | | 0.2473 | 2.34 | 300 | 0.9559 | 0.6510 | | 0.0691 | 3.12 | 400 | 1.1399 | 0.6275 | | 0.0821 | 3.91 | 500 | 1.2060 | 0.6392 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
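A minimal inference sketch for this ViT classifier, assuming the repo id from this record (`SeyedAli/Image-Arousal`); the image path is a placeholder and the label names come from the undisclosed custom dataset.

```python
# Minimal image-classification sketch; the image path is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="SeyedAli/Image-Arousal")
predictions = classifier("path/to/some_image.jpg")  # local path or URL
print(predictions)  # list of {'label': ..., 'score': ...}
```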
muyuanma/marian-finetuned-kde4-en-to-fr
muyuanma
2024-02-16T09:12:40Z
118
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-16T07:12:00Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 53.03371127884619 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8558 - Bleu: 53.0337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
p1atdev/tokenizer_test_1
p1atdev
2024-02-16T09:11:02Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-15T06:19:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Iggnis/Light-Novel-fine-tuned-mistral
Iggnis
2024-02-16T09:09:05Z
0
1
null
[ "region:us" ]
null
2024-02-16T07:39:23Z
# Light Novel fine-tuned with Mistral *This model was created for a course at Université Savoie Mont Blanc* We fine-tuned this model ``` mistralai/Mixtral-8x7B-Instruct-v0.1 ``` on light novel text to make it more UwU ``` alpindale/light-novels ``` A minimal sketch of loading these ingredients is shown below.
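A sketch of loading the two ingredients named above (the base model's tokenizer and the light-novel dataset). The course project's actual fine-tuning script is not included in this repo, so this is only an illustration; access to the Mixtral repo may require accepting its terms on the Hub.

```python
# Sketch of loading the pieces mentioned above; not the course's actual training script.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("alpindale/light-novels")   # light-novel text corpus
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

print(dataset)                                     # inspect available splits
print(tokenizer("UwU what's this?")["input_ids"])  # quick tokenizer check
```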
rAIfle/ohno-8x7B-exl2-rpcal
rAIfle
2024-02-16T08:54:25Z
0
0
null
[ "region:us" ]
null
2024-02-15T13:28:29Z
same deal as always, 200 rows, 8192 tokens, PIPPA, yadayada. home-merged, check [rAIfle/ohno-8x7B-fp16](https://huggingface.co/rAIfle/ohno-8x7B-fp16) for source. no clue what prompt to use, nor what settings. model deemed fit for human consumption, at least, though it is pretty braindamaged.
rAIfle/ohno-8x7B-fp16
rAIfle
2024-02-16T08:53:58Z
8
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:Envoid/Mixtral-Instruct-ITR-8x7B", "base_model:merge:Envoid/Mixtral-Instruct-ITR-8x7B", "base_model:NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss", "base_model:merge:NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss", "base_model:retrieval-bar/Mixtral-8x7B-v0.1_case-briefs", "base_model:merge:retrieval-bar/Mixtral-8x7B-v0.1_case-briefs", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T11:33:35Z
--- base_model: - Envoid/Mixtral-Instruct-ITR-8x7B - Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora - Envoid/Mixtral-Instruct-ITR-8x7B - retrieval-bar/Mixtral-8x7B-v0.1_case-briefs - NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss - Envoid/Mixtral-Instruct-ITR-8x7B tags: - mergekit - merge --- # ohno-8x7b this... will either be my magnum opus... or terrible. no inbetweens! Post-test verdict: It's mostly braindamaged. Might be my settings or something, idk. the `./output` mentioned below is my own merge using identical recipe as [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B). # output_merge2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) as a base. ### Models Merged The following models were included in the merge: * ./output/ + /ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b * [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora) * [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) + [retrieval-bar/Mixtral-8x7B-v0.1_case-briefs](https://huggingface.co/retrieval-bar/Mixtral-8x7B-v0.1_case-briefs) * [NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ./output/+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b parameters: density: 0.66 weight: 1.0 - model: Envoid/Mixtral-Instruct-ITR-8x7B+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs parameters: density: 0.1 weight: 0.25 - model: Envoid/Mixtral-Instruct-ITR-8x7B+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora parameters: density: 0.66 weight: 0.5 - model: NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss parameters: density: 0.15 weight: 0.3 merge_method: dare_ties base_model: Envoid/Mixtral-Instruct-ITR-8x7B dtype: float16 ```
jungyuko/DAVinCI-Yi-Ko-6B-v1.1
jungyuko
2024-02-16T08:51:00Z
58
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T08:11:16Z
--- license: cc-by-nc-4.0 --- ## DAVinCI-Yi-Ko-6B-v1.1 This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on an unknown dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 4 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 8 * total_train_batch_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
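The hyperparameter list above maps directly onto 🤗 `TrainingArguments`. The sketch below only illustrates that mapping; it is not the author's training script, and `output_dir` is a placeholder.

```python
# Illustration of the listed hyperparameters as TrainingArguments (output_dir is a placeholder).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="davinci-yi-ko-6b-v1.1",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,       # 4 x 8 = 32 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                           # "Native AMP" mixed precision
)
```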
thrunlab/Mistral_Sparse_refined_web_70p_2024-02-15
thrunlab
2024-02-16T08:49:13Z
4
0
transformers
[ "transformers", "safetensors", "sparse_mistral", "text-generation", "generated_from_trainer", "custom_code", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-02-15T08:42:43Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: Mistral_Sparse_refined_web_70p_2024-02-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral_Sparse_refined_web_70p_2024-02-15 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2065 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 0 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0372 | 0.0 | 25 | 3.1256 | | 2.6176 | 0.01 | 50 | 2.8951 | | 2.5321 | 0.01 | 75 | 2.7409 | | 2.4603 | 0.02 | 100 | 2.6753 | | 2.4033 | 0.02 | 125 | 2.6424 | | 2.4821 | 0.02 | 150 | 2.6147 | | 2.4008 | 0.03 | 175 | 2.5858 | | 2.3651 | 0.03 | 200 | 2.5688 | | 2.3873 | 0.04 | 225 | 2.5565 | | 2.4145 | 0.04 | 250 | 2.5470 | | 2.3295 | 0.04 | 275 | 2.5321 | | 2.3458 | 0.05 | 300 | 2.5185 | | 2.3587 | 0.05 | 325 | 2.5146 | | 2.1873 | 0.06 | 350 | 2.5093 | | 2.3502 | 0.06 | 375 | 2.5093 | | 2.3837 | 0.06 | 400 | 2.5021 | | 2.3747 | 0.07 | 425 | 2.4994 | | 2.3292 | 0.07 | 450 | 2.4957 | | 2.2438 | 0.08 | 475 | 2.4940 | | 2.3102 | 0.08 | 500 | 2.4889 | | 2.3791 | 0.08 | 525 | 2.4858 | | 2.2743 | 0.09 | 550 | 2.4827 | | 2.4148 | 0.09 | 575 | 2.4813 | | 2.2115 | 0.1 | 600 | 2.4830 | | 2.2963 | 0.1 | 625 | 2.4834 | | 2.3762 | 0.1 | 650 | 2.4805 | | 2.3657 | 0.11 | 675 | 2.4764 | | 2.3219 | 0.11 | 700 | 2.4746 | | 2.3166 | 0.12 | 725 | 2.4712 | | 2.2193 | 0.12 | 750 | 2.4747 | | 2.2629 | 0.12 | 775 | 2.4703 | | 2.3504 | 0.13 | 800 | 2.4732 | | 2.3523 | 0.13 | 825 | 2.4662 | | 2.3362 | 0.14 | 850 | 2.4645 | | 2.202 | 0.14 | 875 | 2.4659 | | 2.2795 | 0.14 | 900 | 2.4682 | | 2.2254 | 0.15 | 925 | 2.4621 | | 2.3507 | 0.15 | 950 | 2.4642 | | 2.2825 | 0.16 | 975 | 2.4624 | | 2.3301 | 0.16 | 1000 | 2.4603 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
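This record is tagged `custom_code` (the `sparse_mistral` architecture ships its own modeling code), so loading it through the Auto classes is expected to require `trust_remote_code=True`. A minimal, hypothetical loading sketch:

```python
# Hypothetical loading sketch; the RefinedWeb evaluation data is not included in this record.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thrunlab/Mistral_Sparse_refined_web_70p_2024-02-15"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # needed for the custom sparse_mistral architecture (assumed)
)
```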
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel-MISTRAL-notrescaled_30000
DragosGorduza
2024-02-16T08:49:03Z
46
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-16T08:48:23Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 64875 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 80000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
shanhy/xlm-roberta-base_lr0.0001_seed42_basic_original_esp-hau-eng_train
shanhy
2024-02-16T08:48:02Z
101
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T08:46:59Z
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base_lr0.0001_seed42_basic_original_esp-hau-eng_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base_lr0.0001_seed42_basic_original_esp-hau-eng_train This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0382 - Spearman Corr: 0.5403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.45 | 200 | 0.0446 | 0.0681 | | 0.0622 | 2.91 | 400 | 0.0538 | nan | | 0.0586 | 4.36 | 600 | 0.0468 | -0.0637 | | 0.0586 | 5.82 | 800 | 0.0450 | -0.0228 | | 0.0578 | 7.27 | 1000 | 0.0468 | nan | | 0.0569 | 8.73 | 1200 | 0.0437 | 0.1100 | | 0.0538 | 10.18 | 1400 | 0.0431 | 0.2566 | | 0.0538 | 11.64 | 1600 | 0.0465 | nan | | 0.0539 | 13.09 | 1800 | 0.0447 | 0.0202 | | 0.0558 | 14.55 | 2000 | 0.0463 | 0.1008 | | 0.0561 | 16.0 | 2200 | 0.0456 | nan | | 0.0561 | 17.45 | 2400 | 0.0448 | 0.0437 | | 0.0552 | 18.91 | 2600 | 0.0473 | 0.0297 | | 0.0512 | 20.36 | 2800 | 0.0522 | 0.1850 | | 0.0512 | 21.82 | 3000 | 0.0379 | 0.4856 | | 0.0401 | 23.27 | 3200 | 0.0401 | 0.4589 | | 0.0315 | 24.73 | 3400 | 0.0396 | 0.4645 | | 0.0277 | 26.18 | 3600 | 0.0387 | 0.4813 | | 0.0277 | 27.64 | 3800 | 0.0376 | 0.5494 | | 0.0258 | 29.09 | 4000 | 0.0382 | 0.5403 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
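The `Spearman Corr` column above is, presumably, the rank correlation between the model's predicted scores and the gold labels. As a generic illustration of that metric (the evaluation data is not part of this record, so the numbers below are made up):

```python
# Generic Spearman-correlation sketch; the scores below are placeholders, not eval data.
from scipy.stats import spearmanr

predicted = [0.71, 0.32, 0.90, 0.15, 0.55]
gold = [0.80, 0.25, 0.95, 0.10, 0.60]

corr, p_value = spearmanr(predicted, gold)
print(f"Spearman Corr: {corr:.4f} (p={p_value:.3g})")
```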
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel-MISTRAL-notrescaled_50000
DragosGorduza
2024-02-16T08:47:38Z
46
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-16T08:46:57Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 64875 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 80000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel-MISTRAL-notrescaled_10000
DragosGorduza
2024-02-16T08:46:56Z
47
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-16T08:46:15Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 64875 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 80000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel-MISTRAL-notrescaled_80000
DragosGorduza
2024-02-16T08:46:15Z
47
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-16T08:45:34Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 64875 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 80000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel-MISTRAL-notrescaled_60000
DragosGorduza
2024-02-16T08:45:34Z
48
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-16T08:44:51Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 64875 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 80000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
DragosGorduza/FRPile_GPL_test_pipeline_DragosGorduza-FRPile_MLM_Basel-MISTRAL-notrescaled_20000
DragosGorduza
2024-02-16T08:44:07Z
46
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-16T08:43:20Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 64875 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 80000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Shijia/furina_seed42_eng_kin_amh_latin_2e-05
Shijia
2024-02-16T08:42:46Z
89
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T08:41:14Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_kin_amh_latin_2e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_kin_amh_latin_2e-05 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0242 - Spearman Corr: 0.7470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.75 | 200 | 0.0281 | 0.6376 | | 0.0807 | 3.51 | 400 | 0.0246 | 0.7176 | | 0.0216 | 5.26 | 600 | 0.0249 | 0.7301 | | 0.0153 | 7.02 | 800 | 0.0226 | 0.7384 | | 0.0119 | 8.77 | 1000 | 0.0236 | 0.7484 | | 0.0096 | 10.53 | 1200 | 0.0247 | 0.7425 | | 0.0079 | 12.28 | 1400 | 0.0226 | 0.7424 | | 0.0068 | 14.04 | 1600 | 0.0236 | 0.7499 | | 0.0068 | 15.79 | 1800 | 0.0235 | 0.7482 | | 0.006 | 17.54 | 2000 | 0.0256 | 0.7489 | | 0.0054 | 19.3 | 2200 | 0.0240 | 0.7489 | | 0.005 | 21.05 | 2400 | 0.0242 | 0.7470 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
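The card above does not include a usage snippet. The sketch below is a minimal, unofficial example that assumes the fine-tuned head is a single-output regression layer over sentence pairs (the evaluation reports Spearman correlation rather than accuracy); the sentence pair and the interpretation of the score are illustrative only.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "Shijia/furina_seed42_eng_kin_amh_latin_2e-05"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score how related two sentences are; the head is assumed to emit a single value.
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"relatedness score: {score:.3f}")
```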
nldatascientist/hyp_vllm_v01_GGUF
nldatascientist
2024-02-16T08:41:09Z
6
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-16T08:38:41Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: unsloth/mistral-7b-bnb-4bit --- # Uploaded model - **Developed by:** nldatascientist - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
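Since this repository ships GGUF weights, one lightweight way to try the model locally is through `llama-cpp-python`. The sketch below is an assumption rather than an official recipe: the GGUF filename and the instruction-style prompt are placeholders, so substitute the actual file from this repo and whatever prompt template was used during fine-tuning.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder filename -- replace with the actual .gguf file from this repository.
llm = Llama(model_path="hyp_vllm_v01.Q4_K_M.gguf", n_ctx=2048)

# The Alpaca-style prompt below is only a guess at the fine-tuning format.
prompt = "### Instruction:\nExplain what a GGUF file is in one sentence.\n\n### Response:\n"
output = llm(prompt, max_tokens=128, stop=["###"])
print(output["choices"][0]["text"])
```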
vilm/Quyen-Pro-v0.1-GGUF
vilm
2024-02-16T08:38:18Z
62
5
transformers
[ "transformers", "gguf", "en", "dataset:teknium/OpenHermes-2.5", "dataset:LDJnr/Capybara", "dataset:Intel/orca_dpo_pairs", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-06T16:39:52Z
--- library_name: transformers license: other datasets: - teknium/OpenHermes-2.5 - LDJnr/Capybara - Intel/orca_dpo_pairs - argilla/distilabel-intel-orca-dpo-pairs language: - en --- # Quyen <img src="quyen.webp" width="512" height="512" alt="Quyen"> # Model Description Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions: - **Quyen-SE (0.5B)** - **Quyen-Mini (1.8B)** - **Quyen (4B)** - **Quyen-Plus (7B)** - **Quyen-Pro (14B)** - **Quyen-Pro-Max (72B)** All models were trained with SFT and DPO using the following datasets: - *OpenHermes-2.5* by **Teknium** - *Capybara* by **LDJ** - *distilabel-intel-orca-dpo-pairs* by **argilla** - *orca_dpo_pairs* by **Intel** - and Private Data by **Ontocord** & **BEE-spoke-data** # Prompt Template - All Quyen models use ChatML as the default template: ``` <|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Hello world.<|im_end|> <|im_start|>assistant ``` - You can also use `apply_chat_template`: ```python messages = [ {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Hello world."} ] gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt") model.generate(gen_input) ``` # Benchmarks - Coming soon! We will update the benchmarks later. # Acknowledgement - We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
wesleyyip/pony
wesleyyip
2024-02-16T08:37:39Z
0
0
null
[ "license:other", "region:us" ]
null
2024-02-16T08:36:41Z
--- license: other license_name: other license_link: LICENSE ---
isha1/sap_abap_ft_model_orig_150
isha1
2024-02-16T08:34:55Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T08:32:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
worldboss/tinyllama-1.8B-Chat-nia-elbowai-ft-v2
worldboss
2024-02-16T08:26:30Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T08:26:27Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
fzzhang/mistralv1_gsm8k_merged
fzzhang
2024-02-16T08:24:57Z
47
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:gsm8k", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T08:11:53Z
--- library_name: transformers license: apache-2.0 datasets: - gsm8k language: - en pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
technocrat3128/sentiment_analysis_Twitter_roberta_Balanced_2
technocrat3128
2024-02-16T08:15:26Z
96
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T08:15:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shijia/furina_seed42_eng_amh_hau_latin_2e-05
Shijia
2024-02-16T08:12:30Z
100
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T08:11:01Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_amh_hau_latin_2e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_amh_hau_latin_2e-05 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0206 - Spearman Corr: 0.7847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.55 | 200 | 0.0268 | 0.6876 | | 0.091 | 3.1 | 400 | 0.0215 | 0.7561 | | 0.0258 | 4.65 | 600 | 0.0199 | 0.7798 | | 0.0187 | 6.2 | 800 | 0.0210 | 0.7878 | | 0.0187 | 7.75 | 1000 | 0.0213 | 0.7817 | | 0.0148 | 9.3 | 1200 | 0.0238 | 0.7787 | | 0.0126 | 10.85 | 1400 | 0.0203 | 0.7780 | | 0.0102 | 12.4 | 1600 | 0.0207 | 0.7800 | | 0.0088 | 13.95 | 1800 | 0.0206 | 0.7863 | | 0.0088 | 15.5 | 2000 | 0.0223 | 0.7771 | | 0.0079 | 17.05 | 2200 | 0.0206 | 0.7847 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
ImNotTarzan/finetuning-sentiment-model-3000-samples
ImNotTarzan
2024-02-16T08:11:28Z
195
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-13T08:37:07Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cpu - Datasets 2.17.0 - Tokenizers 0.15.1
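No usage example is given above, so a minimal sketch with the `transformers` pipeline follows. The input sentence is invented, and the label names (often `LABEL_0`/`LABEL_1` by default) depend on how the training labels were configured.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ImNotTarzan/finetuning-sentiment-model-3000-samples",
)

# Label names depend on the training configuration (often LABEL_0 / LABEL_1).
print(classifier("This movie was a pleasant surprise from start to finish."))
```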
UsmanAXAI/whisper-small-ft-common-voice-11-ar
UsmanAXAI
2024-02-16T08:07:00Z
64
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "ar-asr-leaderboard", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-02-15T08:35:33Z
--- language: - hi license: apache-2.0 tags: - ar-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer base_model: openai/whisper-small model-index: - name: Whisper Small Ar - AxAI results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: ar split: None args: 'config: ar, split: test[:10%]' metrics: - type: wer value: 116.16948508455253 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Ar - AxAI This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.8021 - Wer: 116.1695 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0331 | 4.15 | 1000 | 0.5950 | 130.9519 | | 0.0031 | 8.3 | 2000 | 0.7200 | 114.9724 | | 0.0006 | 12.45 | 3000 | 0.7821 | 116.3405 | | 0.0006 | 16.6 | 4000 | 0.8021 | 116.1695 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
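A minimal transcription sketch with the `transformers` ASR pipeline is shown below; `sample.wav` is a placeholder path to an Arabic audio clip (16 kHz mono works best for Whisper), not a file shipped with this repo.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="UsmanAXAI/whisper-small-ft-common-voice-11-ar",
)

# "sample.wav" is a placeholder; point it at a real Arabic audio file.
result = asr("sample.wav")
print(result["text"])
```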
ppisljar/slo_g2p_byt5
ppisljar
2024-02-16T08:06:43Z
0
0
nemo
[ "nemo", "onnx", "sl", "license:cc-by-3.0", "region:us" ]
null
2024-01-22T12:49:03Z
--- license: cc-by-3.0 language: - sl metrics: - wer - cer library_name: nemo --- Slovenian G2P (grapheme-to-phoneme) model based on google/byt5-small, fine-tuned on the G2P task with a sentence-level dataset. Character error rate (CER): 0.25%. Check infer.py for example usage.
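infer.py in the repository is the authoritative reference. Purely as an illustration, and assuming the checkpoint can also be loaded as a standard ByT5 sequence-to-sequence model through `transformers` (the repo is tagged both `nemo` and `onnx`, so this may not apply), a G2P call could look roughly like this; the input word is arbitrary.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Assumes transformers-compatible ByT5 weights are available in the repo.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("ppisljar/slo_g2p_byt5")

# ByT5 works directly on UTF-8 bytes, so the grapheme string goes in as plain text.
inputs = tokenizer("zdravo", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```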
Habana/albert-xxlarge-v1
Habana
2024-02-16T08:04:52Z
3,087
0
null
[ "optimum_habana", "license:apache-2.0", "region:us" ]
null
2022-04-22T18:05:35Z
--- license: apache-2.0 --- [Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy and fast model loading, training and inference in single- and multi-HPU settings for different downstream tasks. Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana). ## ALBERT XXLarge model HPU configuration This model only contains the `GaudiConfig` file for running the [albert-xxlarge-v1](https://huggingface.co/albert-xxlarge-v1) model on Habana's Gaudi processors (HPU). **This model contains no model weights, only a GaudiConfig.** It lets you specify: - `use_torch_autocast`: whether to use PyTorch's autocast mixed precision - `use_fused_adam`: whether to use Habana's custom AdamW implementation - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator ## Usage The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs. [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with ALBERT XXLarge using the following command: ```bash python run_qa.py \ --model_name_or_path albert-xxlarge-v1 \ --gaudi_config_name Habana/albert-xxlarge-v1 \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --per_device_eval_batch_size 2 \ --learning_rate 5e-6 \ --num_train_epochs 2 \ --max_seq_length 384 \ --output_dir /tmp/squad/ \ --use_habana \ --use_lazy_mode \ --throughput_warmup_steps 3 \ --bf16 ``` Check out the [documentation](https://huggingface.co/docs/optimum/habana/index) for more advanced usage and examples.
Brackly/malawi_quantized_model
Brackly
2024-02-16T08:03:28Z
60
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-16T07:59:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Isotonic/smol_llama_DialogSumm
Isotonic
2024-02-16T07:57:03Z
91
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:Felladrin/Smol-Llama-101M-Chat-v1", "base_model:finetune:Felladrin/Smol-Llama-101M-Chat-v1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-09T14:47:23Z
--- license: apache-2.0 base_model: Felladrin/Smol-Llama-101M-Chat-v1 tags: - generated_from_trainer metrics: - accuracy model-index: - name: smol_llama_DialogSumm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smol_llama_DialogSumm This model is a fine-tuned version of [Felladrin/Smol-Llama-101M-Chat-v1](https://huggingface.co/Felladrin/Smol-Llama-101M-Chat-v1) on [Isotonic/DialogSumm](https://huggingface.co/datasets/Isotonic/DialogSumm) dataset. It achieves the following results on the evaluation set: - Loss: 1.8918 - Accuracy: 0.6050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.3 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 411 | 2.0053 | 0.5871 | | 2.0885 | 2.0 | 822 | 1.9287 | 0.5971 | | 1.8728 | 3.0 | 1233 | 1.8916 | 0.6039 | | 1.7214 | 4.0 | 1644 | 1.8918 | 0.6050 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1
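Since the card gives no inference example, here is a hedged sketch for dialogue summarization. It assumes the tokenizer ships a chat template inherited from the base chat model and that a plain "summarize this dialogue" user turn is an acceptable prompt; the sample dialogue is invented.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Isotonic/smol_llama_DialogSumm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

dialogue = (
    "#Person1#: Hi, how was the trip?\n"
    "#Person2#: Great, we hiked all weekend and the weather held up."
)
messages = [{"role": "user", "content": f"Summarize this dialogue:\n{dialogue}"}]

# Relies on the chat template stored in the tokenizer config.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```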
Shijia/furina_seed42_eng_amh_esp_cross_2e-05
Shijia
2024-02-16T07:48:25Z
99
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T07:46:54Z
--- base_model: yihongLiu/furina tags: - generated_from_trainer model-index: - name: furina_seed42_eng_amh_esp_cross_2e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furina_seed42_eng_amh_esp_cross_2e-05 This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0188 - Spearman Corr: 0.7538 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 0.53 | 200 | 0.0279 | 0.6644 | | No log | 1.06 | 400 | 0.0165 | 0.7300 | | No log | 1.59 | 600 | 0.0170 | 0.7252 | | 0.0434 | 2.12 | 800 | 0.0157 | 0.7411 | | 0.0434 | 2.65 | 1000 | 0.0160 | 0.7515 | | 0.0434 | 3.17 | 1200 | 0.0162 | 0.7554 | | 0.0434 | 3.7 | 1400 | 0.0160 | 0.7602 | | 0.019 | 4.23 | 1600 | 0.0170 | 0.7490 | | 0.019 | 4.76 | 1800 | 0.0169 | 0.7468 | | 0.019 | 5.29 | 2000 | 0.0165 | 0.7473 | | 0.019 | 5.82 | 2200 | 0.0181 | 0.7492 | | 0.0138 | 6.35 | 2400 | 0.0188 | 0.7538 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
jaehy12/jh_new_2
jaehy12
2024-02-16T07:47:56Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T07:38:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mohamedarsath/dog
Mohamedarsath
2024-02-16T07:44:20Z
1
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-16T07:35:47Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### dog Dreambooth model trained by Mohamedarsath following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: JPCE-063 Sample pictures of this concept: ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65cf053213fcc46fa6e91e79/gwltYGJYN6U3GMt31qkLz.jpeg) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65cf053213fcc46fa6e91e79/zM_aF3HrCcRg3tUO74lES.jpeg)
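As a rough usage sketch (not part of the original card), the concept can presumably be loaded like any Stable Diffusion DreamBooth checkpoint with `diffusers`; the prompt wording and the assumption that plain "dog" is the instance token are guesses.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Mohamedarsath/dog", torch_dtype=torch.float16
).to("cuda")

# "dog" is assumed to be the DreamBooth instance token; adjust if training used another.
image = pipe("a photo of dog sitting on a beach", num_inference_steps=30).images[0]
image.save("dog_on_beach.png")
```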
LegoClipStars/Olivia_Woods_RH
LegoClipStars
2024-02-16T07:33:00Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:cagliostrolab/animagine-xl-3.0", "base_model:adapter:cagliostrolab/animagine-xl-3.0", "license:cc-by-4.0", "region:us" ]
text-to-image
2024-02-16T07:32:20Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: NEFT parameters: negative_prompt: High school student output: url: images/03c0f2cdac101fd97c150f00b4a871d4.jpg base_model: cagliostrolab/animagine-xl-3.0 instance_prompt: Please spare me license: cc-by-4.0 --- # Olivia_Woods_RH <Gallery /> ## Model description Here's my RVC voice model of Olivia Woods from Rainbow High season 4. ## Trigger words You should use `Please spare me` to trigger the image generation. ## Download model [Download](/LegoClipStars/Olivia_Woods_RH/tree/main) them in the Files & versions tab.
VenkateshSoni/roberta-finetuned-subjqa-movies_2
VenkateshSoni
2024-02-16T07:21:15Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:deepset/roberta-base-squad2", "base_model:finetune:deepset/roberta-base-squad2", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-16T07:20:38Z
--- license: cc-by-4.0 base_model: deepset/roberta-base-squad2 tags: - generated_from_trainer model-index: - name: roberta-finetuned-subjqa-movies_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-movies_2 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
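The card lacks a usage snippet; a minimal question-answering sketch follows. The question and context are invented movie-review text, chosen only because the model was fine-tuned for SubjQA-movies-style QA.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="VenkateshSoni/roberta-finetuned-subjqa-movies_2",
)

result = qa(
    question="How was the acting?",
    context="The plot dragged in the middle, but the acting was superb throughout.",
)
print(result["answer"], round(result["score"], 3))
```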
sravaniayyagari/llama2-finetuned-model-latest
sravaniayyagari
2024-02-16T07:18:53Z
60
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-16T06:41:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dost-asti/BigBird-tl-cased
dost-asti
2024-02-16T07:09:03Z
118
0
transformers
[ "transformers", "pytorch", "big_bird", "fill-mask", "Tagalog", "Taglish", "tl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-17T03:44:00Z
--- language: - tl tags: - Tagalog - Taglish --- ## Model Description As part of the ITANONG project's 10-billion-token Tagalog dataset, we have introduced our initial pre-trained language models for Philippine languages. Our model suite encompasses various BERT-based, GPT-based, and Sentence Transformer models tailored for Tagalog, Taglish, and Cebuano. ## Training Details This model was trained using an Nvidia V100-32GB GPU on the DOST-ASTI Computing and Archiving Research Environment (COARE) - https://asti.dost.gov.ph/projects/coare/ ### Training Data The training dataset was compiled from both formal and informal sources, consisting of 5,159,917 instances from formal channels and 3,057,180 from informal sources. More information on pre-processing and training parameters can be found in our paper. ## Citation Paper : iTANONG-DS : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages Bibtex: ``` @inproceedings{visperas-etal-2023-itanong, title = "i{TANONG}-{DS} : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select {P}hilippine Languages", author = "Visperas, Moses L. and Borjal, Christalline Joie and Adoptante, Aunhel John M and Abacial, Danielle Shine R. and Decano, Ma. Miciella and Peramo, Elmer C", editor = "Abbas, Mourad and Freihat, Abed Alhakim", booktitle = "Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)", month = dec, year = "2023", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.icnlsp-1.34", pages = "316--323", } ```
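The card above stops short of a usage example. The snippet below is a minimal sketch, not taken from the original card: it loads the checkpoint with the `fill-mask` pipeline and reads the mask token from the model's own tokenizer; the Tagalog sentence is purely illustrative.

```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the published checkpoint.
fill_mask = pipeline("fill-mask", model="dost-asti/BigBird-tl-cased")

# Use the tokenizer's own mask token rather than assuming a specific mask format.
text = f"Magandang {fill_mask.tokenizer.mask_token} sa inyong lahat."
print(fill_mask(text))  # top candidate fillers with scores
```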
dost-asti/gpt2-tl-cased
dost-asti
2024-02-16T07:08:48Z
191
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "Tagalog", "Taglish", "tl", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-17T03:24:35Z
--- language: - tl tags: - Tagalog - Taglish - gpt2 --- ## Model Description As part of the ITANONG project's 10-billion-token Tagalog dataset, we have introduced our initial pre-trained language models for Philippine languages. Our model suite encompasses various BERT-based, GPT-based, and Sentence Transformer models tailored for Tagalog, Taglish, and Cebuano. ## Training Details This model was trained using an Nvidia V100-32GB GPU on the DOST-ASTI Computing and Archiving Research Environment (COARE) - https://asti.dost.gov.ph/projects/coare/ ### Training Data The training dataset was compiled from both formal and informal sources, consisting of 5,159,917 instances from formal channels and 3,057,180 from informal sources. More information on pre-processing and training parameters can be found in our paper. ## Citation Paper : iTANONG-DS : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages Bibtex: ``` @inproceedings{visperas-etal-2023-itanong, title = "i{TANONG}-{DS} : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select {P}hilippine Languages", author = "Visperas, Moses L. and Borjal, Christalline Joie and Adoptante, Aunhel John M and Abacial, Danielle Shine R. and Decano, Ma. Miciella and Peramo, Elmer C", editor = "Abbas, Mourad and Freihat, Abed Alhakim", booktitle = "Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)", month = dec, year = "2023", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.icnlsp-1.34", pages = "316--323", } ```
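No usage snippet accompanies the card; the following is a minimal sketch (not from the original card) that generates a continuation for a short, illustrative Tagalog prompt with the `text-generation` pipeline.

```python
from transformers import pipeline

# Minimal sketch: open-ended generation with the Tagalog GPT-2 checkpoint.
generator = pipeline("text-generation", model="dost-asti/gpt2-tl-cased")
print(generator("Ang Pilipinas ay", max_new_tokens=30)[0]["generated_text"])
```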
dost-asti/BERT-tl-cased
dost-asti
2024-02-16T07:08:29Z
111
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "Tagalog", "Taglish", "tl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-16T14:09:35Z
--- language: - tl tags: - Tagalog - Taglish --- ## Model Description As part of the ITANONG project's 10-billion-token Tagalog dataset, we have introduced our initial pre-trained language models for Philippine languages. Our model suite encompasses various BERT-based, GPT-based, and Sentence Transformer models tailored for Tagalog, Taglish, and Cebuano. ## Training Details This model was trained using an Nvidia V100-32GB GPU on the DOST-ASTI Computing and Archiving Research Environment (COARE) - https://asti.dost.gov.ph/projects/coare/ ### Training Data The training dataset was compiled from both formal and informal sources, consisting of 5,159,917 instances from formal channels and 3,057,180 from informal sources. More information on pre-processing and training parameters can be found in our paper. ## Citation Paper : iTANONG-DS : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages Bibtex: ``` @inproceedings{visperas-etal-2023-itanong, title = "i{TANONG}-{DS} : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select {P}hilippine Languages", author = "Visperas, Moses L. and Borjal, Christalline Joie and Adoptante, Aunhel John M and Abacial, Danielle Shine R. and Decano, Ma. Miciella and Peramo, Elmer C", editor = "Abbas, Mourad and Freihat, Abed Alhakim", booktitle = "Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)", month = dec, year = "2023", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.icnlsp-1.34", pages = "316--323", } ```
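As with the other checkpoints in this suite, a hedged usage sketch (not part of the original card) for masked-token prediction; the Tagalog sentence is illustrative only.

```python
from transformers import pipeline

# Minimal sketch: fill-mask inference with the Tagalog BERT checkpoint.
fill_mask = pipeline("fill-mask", model="dost-asti/BERT-tl-cased")
text = f"Maraming {fill_mask.tokenizer.mask_token} sa inyong tulong."
print(fill_mask(text))
```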
CatBarks/GPT2ES_ClassWeighted10_tokenizer
CatBarks
2024-02-16T07:08:12Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-16T07:08:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dost-asti/RoBERTa-ceb-cased
dost-asti
2024-02-16T07:07:30Z
101
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "RoBERTa", "Cebuano", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-15T04:26:18Z
--- tags: - RoBERTa - Cebuano --- ## Model Description As part of the ITANONG project's 10-billion-token Tagalog dataset, we have introduced our initial pre-trained language models for Philippine languages. Our model suite encompasses various BERT-based, GPT-based, and Sentence Transformer models tailored for Tagalog, Taglish, and Cebuano. ## Training Details This model was trained using an Nvidia V100-32GB GPU on the DOST-ASTI Computing and Archiving Research Environment (COARE) - https://asti.dost.gov.ph/projects/coare/ ### Training Data The training dataset was compiled from both formal and informal sources, consisting of 194,001 instances from formal channels and 1,816,735 from informal sources. More information on pre-processing and training parameters can be found in our paper. ## Citation Paper : iTANONG-DS : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages Bibtex: ``` @inproceedings{visperas-etal-2023-itanong, title = "i{TANONG}-{DS} : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select {P}hilippine Languages", author = "Visperas, Moses L. and Borjal, Christalline Joie and Adoptante, Aunhel John M and Abacial, Danielle Shine R. and Decano, Ma. Miciella and Peramo, Elmer C", editor = "Abbas, Mourad and Freihat, Abed Alhakim", booktitle = "Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)", month = dec, year = "2023", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.icnlsp-1.34", pages = "316--323", } ```
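A minimal, hedged usage sketch (not from the original card) for the Cebuano fill-mask checkpoint; the example sentence is illustrative.

```python
from transformers import pipeline

# Minimal sketch: fill-mask inference with the Cebuano RoBERTa checkpoint.
fill_mask = pipeline("fill-mask", model="dost-asti/RoBERTa-ceb-cased")
text = f"Maayong {fill_mask.tokenizer.mask_token} kaninyong tanan."
print(fill_mask(text))
```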
dost-asti/BERT-ceb-cased
dost-asti
2024-02-16T07:06:46Z
158
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "BERT", "Cebuano", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-11-15T02:22:08Z
--- tags: - BERT - Cebuano --- ## Model Description As part of the ITANONG project's 10-billion-token Tagalog dataset, we have introduced our initial pre-trained language models for Philippine languages. Our model suite encompasses various BERT-based, GPT-based, and Sentence Transformer models tailored for Tagalog, Taglish, and Cebuano. ## Training Details This model was trained using an Nvidia V100-32GB GPU on the DOST-ASTI Computing and Archiving Research Environment (COARE) - https://asti.dost.gov.ph/projects/coare/ ### Training Data The training dataset was compiled from both formal and informal sources, consisting of 194,001 instances from formal channels and 1,816,735 from informal sources. More information on pre-processing and training parameters can be found in our paper. ## Citation Paper : iTANONG-DS : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select Philippine Languages Bibtex: ``` @inproceedings{visperas-etal-2023-itanong, title = "i{TANONG}-{DS} : A Collection of Benchmark Datasets for Downstream Natural Language Processing Tasks on Select {P}hilippine Languages", author = "Visperas, Moses L. and Borjal, Christalline Joie and Adoptante, Aunhel John M and Abacial, Danielle Shine R. and Decano, Ma. Miciella and Peramo, Elmer C", editor = "Abbas, Mourad and Freihat, Abed Alhakim", booktitle = "Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)", month = dec, year = "2023", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.icnlsp-1.34", pages = "316--323", } ```
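The same hedged fill-mask pattern applies to this Cebuano checkpoint (a sketch, not from the original card; the sentence is illustrative only).

```python
from transformers import pipeline

# Minimal sketch: fill-mask inference with the Cebuano checkpoint.
fill_mask = pipeline("fill-mask", model="dost-asti/BERT-ceb-cased")
text = f"Daghang {fill_mask.tokenizer.mask_token} sa inyong tabang."
print(fill_mask(text))
```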
Krooz/placement-classification-mistral-7b-instruct-v1
Krooz
2024-02-16T07:04:16Z
6
0
peft
[ "peft", "safetensors", "Education", "text-classification", "en", "dataset:Krooz/Campus_Recruitment_Text", "license:cc", "region:us" ]
text-classification
2024-02-15T16:22:21Z
--- license: cc datasets: - Krooz/Campus_Recruitment_Text language: - en library_name: peft pipeline_tag: text-classification tags: - Education --- ## Recruitment Guide Mistral 7B-Instruct Mistral 7B Instruct fine-tuned on the [Campus Recruitment Text](https://huggingface.co/datasets/Krooz/Campus_Recruitment_Text) dataset with LoRA and 4-bit quantization. See the GitHub [repository](https://github.com/Kirushikesh/Campus_Recruitment_Prediction_LLM) for training details. This model is trained with a student's university record as input and the placement status as output. Try out the application in Hugging Face [Spaces](https://huggingface.co/spaces/Krooz/Campus_Recruitment_Demo). ## Usage This repo contains the LoRA parameters of the fine-tuned Mistral 7B model. To perform inference, load and use the model as follows: ``` import torch from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig def format_instruction(input): return f"""### Instruction: Classify the student into Placed/NotPlaced based on his/her college report details. The report includes marks scored by the student in various courses and extra curricular activities taken by them. ### Report: {input} ### Label: """ # input is a report card of a university graduate prompt = """John is a college-level student who has demonstrated academic excellence throughout his schooling journey. He has a cumulative GPA of 6.7 in his university, indicating his strong academic abilities. Additionally, he scored 63 on an aptitude test, showcasing his analytical and problem-solving skills. John has also engaged in one project, demonstrating his creativity and practical skills. In terms of extracurricular activities, John is actively involved in a range of areas. He has participated in one project, which showcases his ability to work collaboratively and achieve results independently. However, he has zero internships and zero workshops/certifications, which could have been an area for improvement. In terms of soft skills, John has a rating of 3.8, which suggests that he has strong social and communication capabilities. He has no placement training, meaning he would benefit from gaining hands-on experience in a professional environment. Overall, John has good academic and extracurricular achievements but would benefit from gaining more practical work experience and soft skills training. 
He has the potential to be an excellent candidate in the future, and it would be beneficial to him to further develop these areas.""" prompt = format_instruction(prompt) # load base LLM model, LoRA params and tokenizer model = AutoPeftModelForCausalLM.from_pretrained( "Krooz/placement-classification-mistral-7b-instruct-v1", low_cpu_mem_usage=True, torch_dtype=torch.float16, load_in_4bit=True, ) tokenizer = AutoTokenizer.from_pretrained("Krooz/placement-classification-mistral-7b-instruct-v1") input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda() # inference with torch.inference_mode(): outputs = model.generate( input_ids=input_ids, max_new_tokens=100, do_sample=False ) # decode output tokens and strip response outputs = outputs.detach().cpu().numpy() outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True) output = outputs[0][len(prompt):] ``` References: * https://medium.com/@jeremyarancio/fine-tune-an-llm-on-your-personal-data-create-a-the-lord-of-the-rings-storyteller-6826dd614fa9 * https://blog.neuralwork.ai/an-llm-fine-tuning-cookbook-with-mistral-7b/ * https://blog.gopenai.com/fine-tuning-mistral-7b-instruct-model-in-colab-a-beginners-guide-0f7bebccf11c
satheeshTM/distilbert-base-uncased-finetuned-emotion
satheeshTM
2024-02-16T07:04:13Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-10T04:50:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9263544647982521 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2262 - Accuracy: 0.9265 - F1: 0.9264 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8603 | 1.0 | 250 | 0.3328 | 0.902 | 0.9014 | | 0.2575 | 2.0 | 500 | 0.2262 | 0.9265 | 0.9264 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
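Since the intended-uses section is still a stub, here is a minimal inference sketch (an editorial addition, not generated by the Trainer) using the `text-classification` pipeline; the example sentence is illustrative.

```python
from transformers import pipeline

# Minimal sketch: score an English sentence with the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification",
    model="satheeshTM/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))  # predicted emotion label with confidence
```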
worldboss/tinyllama-1.8B-Chat-elbowai-ft
worldboss
2024-02-16T07:01:34Z
0
0
null
[ "safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T07:01:31Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
shanhy/xlm-roberta-base_lr5e-06_seed42_basic_original_kin-amh-eng_train
shanhy
2024-02-16T06:54:38Z
89
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T06:53:33Z
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base_lr5e-06_seed42_basic_original_kin-amh-eng_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base_lr5e-06_seed42_basic_original_kin-amh-eng_train This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0312 - Spearman Corr: 0.7479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.75 | 200 | 0.0278 | 0.6150 | | 0.0637 | 3.51 | 400 | 0.0254 | 0.6786 | | 0.0312 | 5.26 | 600 | 0.0284 | 0.7074 | | 0.0249 | 7.02 | 800 | 0.0320 | 0.7195 | | 0.0199 | 8.77 | 1000 | 0.0344 | 0.7285 | | 0.0189 | 10.53 | 1200 | 0.0303 | 0.7318 | | 0.0167 | 12.28 | 1400 | 0.0299 | 0.7356 | | 0.0154 | 14.04 | 1600 | 0.0319 | 0.7411 | | 0.0154 | 15.79 | 1800 | 0.0304 | 0.7382 | | 0.0141 | 17.54 | 2000 | 0.0327 | 0.7460 | | 0.0134 | 19.3 | 2200 | 0.0305 | 0.7507 | | 0.0123 | 21.05 | 2400 | 0.0312 | 0.7479 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
Jarmac/lab1_random
Jarmac
2024-02-16T06:54:15Z
117
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-16T04:36:38Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_trainer datasets: - kde4 model-index: - name: lab1_random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lab1_random This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
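A minimal translation sketch (not part of the generated card), assuming the checkpoint is used exactly like its Marian base model:

```python
from transformers import pipeline

# Minimal sketch: English-to-French translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation_en_to_fr", model="Jarmac/lab1_random")
print(translator("The configuration file could not be loaded.")[0]["translation_text"])
```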
Fukurokun/MemGPT-DPO-MoE-2-mem-6.0bpw-exl2
Fukurokun
2024-02-16T06:53:41Z
3
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T06:52:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fliarbi/phi-2-hummanize1
fliarbi
2024-02-16T06:50:25Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-02-16T06:50:15Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: phi-2-hummanize1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-hummanize1 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.7074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0025 - train_batch_size: 20 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 400 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.02 | 1 | 2.0864 | | No log | 0.05 | 2 | 4.0671 | | No log | 0.07 | 3 | 3.6332 | | No log | 0.09 | 4 | 2.5537 | | 3.0197 | 0.11 | 5 | 2.3394 | | 3.0197 | 0.14 | 6 | 2.8862 | | 3.0197 | 0.16 | 7 | 2.5140 | | 3.0197 | 0.18 | 8 | 2.4603 | | 3.0197 | 0.21 | 9 | 2.2094 | | 2.5958 | 0.23 | 10 | 2.1767 | | 2.5958 | 0.25 | 11 | 2.3343 | | 2.5958 | 0.28 | 12 | 2.2511 | | 2.5958 | 0.3 | 13 | 2.1854 | | 2.5958 | 0.32 | 14 | 2.1385 | | 2.2944 | 0.34 | 15 | 2.3556 | | 2.2944 | 0.37 | 16 | 2.2056 | | 2.2944 | 0.39 | 17 | 2.2127 | | 2.2944 | 0.41 | 18 | 2.1507 | | 2.2944 | 0.44 | 19 | 2.1388 | | 2.2841 | 0.46 | 20 | 2.6540 | | 2.2841 | 0.48 | 21 | 2.8934 | | 2.2841 | 0.51 | 22 | 3.0981 | | 2.2841 | 0.53 | 23 | 2.4155 | | 2.2841 | 0.55 | 24 | 2.1754 | | 2.7585 | 0.57 | 25 | 2.0927 | | 2.7585 | 0.6 | 26 | 2.0865 | | 2.7585 | 0.62 | 27 | 2.2345 | | 2.7585 | 0.64 | 28 | 2.4123 | | 2.7585 | 0.67 | 29 | 2.7718 | | 2.3906 | 0.69 | 30 | 4.2964 | | 2.3906 | 0.71 | 31 | 6.5295 | | 2.3906 | 0.73 | 32 | 5.8489 | | 2.3906 | 0.76 | 33 | 7.2467 | | 2.3906 | 0.78 | 34 | 7.6353 | | 6.5839 | 0.8 | 35 | 7.7842 | | 6.5839 | 0.83 | 36 | 8.8627 | | 6.5839 | 0.85 | 37 | 7.9511 | | 6.5839 | 0.87 | 38 | 9.7736 | | 6.5839 | 0.9 | 39 | 8.3666 | | 8.6795 | 0.92 | 40 | 8.9768 | | 8.6795 | 0.94 | 41 | 9.0808 | | 8.6795 | 0.96 | 42 | 8.5933 | | 8.6795 | 0.99 | 43 | 8.9317 | | 8.6795 | 1.01 | 44 | 8.5291 | | 8.8177 | 1.03 | 45 | 8.5935 | | 8.8177 | 1.06 | 46 | 8.6773 | | 8.8177 | 1.08 | 47 | 8.5914 | | 8.8177 | 1.1 | 48 | 8.5006 | | 8.8177 | 1.13 | 49 | 8.3959 | | 8.6883 | 1.15 | 50 | 8.2375 | | 8.6883 | 1.17 | 51 | 8.2022 | | 8.6883 | 1.19 | 52 | 8.2063 | | 8.6883 | 1.22 | 53 | 8.2254 | | 8.6883 | 1.24 | 54 | 8.3408 | | 8.4216 | 1.26 | 55 | 8.0367 | | 8.4216 | 1.29 | 56 | 7.8776 | | 8.4216 | 1.31 | 57 | 7.6720 | | 8.4216 | 1.33 | 58 | 7.5050 | | 8.4216 | 1.35 | 59 | 7.3863 | | 7.8151 | 1.38 | 60 | 7.3775 | | 7.8151 | 1.4 | 61 | 7.3820 | | 7.8151 | 1.42 | 62 | 7.2597 | | 7.8151 | 1.45 | 63 | 7.1959 | | 7.8151 | 1.47 | 64 | 7.1233 | | 7.3639 | 1.49 | 65 | 7.0625 | | 7.3639 | 1.52 | 66 | 7.0302 | | 7.3639 | 1.54 | 67 | 6.9862 | | 7.3639 | 1.56 | 68 | 6.9601 | | 7.3639 | 1.58 | 69 | 6.9606 | | 7.1152 | 1.61 | 70 | 6.8977 | | 7.1152 | 1.63 | 71 | 6.8981 | | 7.1152 | 1.65 | 72 | 6.8453 | | 7.1152 | 1.68 | 73 | 6.8523 | | 7.1152 | 1.7 | 74 | 
6.8641 | | 6.9712 | 1.72 | 75 | 6.8261 | | 6.9712 | 1.75 | 76 | 6.8273 | | 6.9712 | 1.77 | 77 | 6.8053 | | 6.9712 | 1.79 | 78 | 6.7712 | | 6.9712 | 1.81 | 79 | 6.7542 | | 6.8925 | 1.84 | 80 | 6.7466 | | 6.8925 | 1.86 | 81 | 6.7341 | | 6.8925 | 1.88 | 82 | 6.7255 | | 6.8925 | 1.91 | 83 | 6.7211 | | 6.8925 | 1.93 | 84 | 6.7154 | | 6.8192 | 1.95 | 85 | 6.7103 | | 6.8192 | 1.97 | 86 | 6.7074 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.17.0 - Tokenizers 0.15.2
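The card does not show how to attach the adapter to its base model. The sketch below is an assumption-laden example: the prompt is made up, and it relies on the adapter config pointing at `microsoft/phi-2` (older `transformers` releases may additionally require `trust_remote_code=True` for phi-2).

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Minimal sketch: load the base model plus this LoRA adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "fliarbi/phi-2-hummanize1", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "Rewrite the following sentence so it sounds more natural:"  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```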
fliarbi/mistral-hummanize1
fliarbi
2024-02-16T06:49:56Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-02-16T06:49:35Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral-hummanize1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-hummanize1 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 12 - total_train_batch_size: 144 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4936 | 0.41 | 50 | 1.4588 | | 1.434 | 0.83 | 100 | 1.4090 | | 1.4032 | 1.24 | 150 | 1.3872 | | 1.392 | 1.65 | 200 | 1.3784 | ### Framework versions - PEFT 0.8.2 - Transformers 4.38.0.dev0 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
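As with the phi-2 adapter above, a hedged loading sketch (not from the card); generation then proceeds exactly as in that example.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Minimal sketch: attach the LoRA weights to the Mistral-7B-v0.1 base model named in the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(
    "fliarbi/mistral-hummanize1", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```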
shanhy/xlm-roberta-base_lr0.0001_seed42_basic_original_amh-esp-eng_train
shanhy
2024-02-16T06:42:51Z
89
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T06:41:55Z
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base_lr0.0001_seed42_basic_original_amh-esp-eng_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base_lr0.0001_seed42_basic_original_amh-esp-eng_train This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0250 - Spearman Corr: 0.6480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearman Corr | |:-------------:|:-----:|:----:|:---------------:|:-------------:| | No log | 1.59 | 200 | 0.0288 | 0.5877 | | 0.0428 | 3.17 | 400 | 0.0254 | 0.6442 | | 0.0257 | 4.76 | 600 | 0.0262 | 0.6591 | | 0.0193 | 6.35 | 800 | 0.0268 | 0.6513 | | 0.0193 | 7.94 | 1000 | 0.0304 | 0.6502 | | 0.0156 | 9.52 | 1200 | 0.0233 | 0.6600 | | 0.0129 | 11.11 | 1400 | 0.0238 | 0.6470 | | 0.0112 | 12.7 | 1600 | 0.0269 | 0.6441 | | 0.0097 | 14.29 | 1800 | 0.0269 | 0.6461 | | 0.0097 | 15.87 | 2000 | 0.0316 | 0.6497 | | 0.0097 | 17.46 | 2200 | 0.0255 | 0.6536 | | 0.0088 | 19.05 | 2400 | 0.0269 | 0.6527 | | 0.0082 | 20.63 | 2600 | 0.0257 | 0.6516 | | 0.008 | 22.22 | 2800 | 0.0246 | 0.6578 | | 0.008 | 23.81 | 3000 | 0.0261 | 0.6603 | | 0.0076 | 25.4 | 3200 | 0.0250 | 0.6480 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
tsavage68/chat_1000STEPS_1e6_05beta_DPO
tsavage68
2024-02-16T06:36:41Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T06:33:00Z
--- base_model: meta-llama/Llama-2-7b-chat-hf tags: - trl - dpo - generated_from_trainer model-index: - name: chat_1000STEPS_1e6_05beta_DPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chat_1000STEPS_1e6_05beta_DPO This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7047 - Rewards/chosen: -0.5484 - Rewards/rejected: -0.8442 - Rewards/accuracies: 0.5319 - Rewards/margins: 0.2958 - Logps/rejected: -20.4796 - Logps/chosen: -17.8414 - Logits/rejected: -0.6334 - Logits/chosen: -0.6333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6923 | 0.2 | 100 | 0.6978 | -0.3692 | -0.4056 | 0.4549 | 0.0364 | -19.6025 | -17.4830 | -0.6054 | -0.6052 | | 0.7106 | 0.39 | 200 | 0.7053 | 0.1136 | -0.0026 | 0.4791 | 0.1161 | -18.7964 | -16.5175 | -0.6058 | -0.6056 | | 0.5991 | 0.59 | 300 | 0.7229 | -0.2199 | -0.3741 | 0.4879 | 0.1541 | -19.5394 | -17.1845 | -0.6117 | -0.6115 | | 0.7082 | 0.78 | 400 | 0.7221 | -0.0056 | -0.1904 | 0.5033 | 0.1848 | -19.1721 | -16.7559 | -0.5870 | -0.5868 | | 0.6684 | 0.98 | 500 | 0.7010 | -0.1029 | -0.3043 | 0.5275 | 0.2014 | -19.3998 | -16.9504 | -0.5454 | -0.5452 | | 0.2004 | 1.17 | 600 | 0.6974 | -0.4104 | -0.6928 | 0.5341 | 0.2824 | -20.1768 | -17.5654 | -0.6005 | -0.6004 | | 0.2715 | 1.37 | 700 | 0.7012 | -0.5147 | -0.8128 | 0.5429 | 0.2981 | -20.4169 | -17.7741 | -0.6258 | -0.6257 | | 0.2303 | 1.56 | 800 | 0.7031 | -0.5366 | -0.8347 | 0.5341 | 0.2981 | -20.4606 | -17.8177 | -0.6321 | -0.6320 | | 0.2729 | 1.76 | 900 | 0.7052 | -0.5480 | -0.8437 | 0.5341 | 0.2957 | -20.4787 | -17.8406 | -0.6333 | -0.6331 | | 0.2621 | 1.95 | 1000 | 0.7047 | -0.5484 | -0.8442 | 0.5319 | 0.2958 | -20.4796 | -17.8414 | -0.6334 | -0.6333 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.0.0+cu117 - Datasets 2.17.0 - Tokenizers 0.15.2
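No usage snippet accompanies the card; the following is a hedged sketch that assumes the standard Llama-2 chat prompt format of the base model.

```python
from transformers import pipeline

# Minimal sketch: generation with the DPO-tuned checkpoint (device_map="auto" requires `accelerate`).
pipe = pipeline("text-generation", model="tsavage68/chat_1000STEPS_1e6_05beta_DPO", device_map="auto")
prompt = "[INST] What should I look for when choosing a running shoe? [/INST]"  # illustrative prompt
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```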
yoon1000/TrOCR_0216
yoon1000
2024-02-16T06:34:58Z
33
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/trocr-base-stage1", "base_model:finetune:microsoft/trocr-base-stage1", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-16T06:32:11Z
--- base_model: microsoft/trocr-base-stage1 tags: - generated_from_trainer model-index: - name: TrOCR_0216 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TrOCR_0216 This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8095 - Cer: 0.0608 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.626 | 0.5 | 500 | 0.9106 | 0.1848 | | 0.5456 | 1.0 | 1000 | 0.7220 | 0.1927 | | 0.4588 | 1.5 | 1500 | 0.7064 | 0.1240 | | 0.6771 | 2.0 | 2000 | 0.7207 | 0.2169 | | 0.3778 | 2.5 | 2500 | 0.6689 | 0.1283 | | 0.4543 | 3.0 | 3000 | 0.6833 | 0.3052 | | 0.4428 | 3.5 | 3500 | 0.6604 | 0.0893 | | 0.4899 | 4.0 | 4000 | 0.7024 | 0.0692 | | 0.2548 | 4.5 | 4500 | 1.0137 | 0.1599 | | 0.0851 | 5.0 | 5000 | 1.8095 | 0.0608 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.13.0 - Tokenizers 0.15.0
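A minimal OCR sketch (not part of the generated card), assuming the processor of the base checkpoint is compatible with this fine-tune; the image path is a placeholder.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Minimal sketch: transcribe a single text-line image with the fine-tuned checkpoint.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-stage1")
model = VisionEncoderDecoderModel.from_pretrained("yoon1000/TrOCR_0216")

image = Image.open("text_line.png").convert("RGB")  # placeholder path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```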
millyjep/Milliychloe
millyjep
2024-02-16T06:26:42Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-16T06:26:42Z
--- license: creativeml-openrail-m ---
Trendyol/Trendyol-LLM-7b-chat-v0.1
Trendyol
2024-02-16T06:15:07Z
3,141
108
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "tr", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-30T12:49:34Z
--- language: - tr - en pipeline_tag: text-generation license: apache-2.0 --- <img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1/resolve/main/llama-tr-image.jpeg" alt="drawing" width="400"/> # **Trendyol LLM** Trendyol LLM is a generative model that is based on LLaMa2 7B model. This is the repository for the chat model. ## Model Details **Model Developers** Trendyol **Variations** base and chat variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Trendyol LLM is an auto-regressive language model (based on LLaMa2 7b) that uses an optimized transformer architecture. The chat version is fine-tuned on 180K instruction sets with the following trainables by using LoRA: - **lr**=1e-4 - **lora_rank**=64 - **lora_alpha**=128 - **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj - **modules_to_save**=embed_tokens,lm_head - **lora_dropout**=0.05 - **fp16**=True - **max_seq_length**=1024 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png" alt="drawing" width="600"/> ## Usage ```python from transformers import AutoModelForCausalLM, LlamaTokenizer, pipeline model_id = "Trendyol/Trendyol-LLM-7b-chat-v0.1" tokenizer = LlamaTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto', load_in_8bit=True) sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map="auto", max_new_tokens=1024, return_full_text=True, repetition_penalty=1.1 ) DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n" TEMPLATE = ( "[INST] <<SYS>>\n" "{system_prompt}\n" "<</SYS>>\n\n" "{instruction} [/INST]" ) def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT): return TEMPLATE.format_map({'instruction': instruction,'system_prompt': system_prompt}) def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT): prompt = generate_prompt(user_query, sys_prompt) outputs = pipe(prompt, **sampling_params ) return outputs[0]["generated_text"].split("[/INST]")[-1] user_query = "Türkiye'de kaç il var?" response = generate_output(user_query) print(response) ``` with chat template: ```python pipe = pipeline("conversational", model=model, tokenizer=tokenizer, device_map="auto", max_new_tokens=1024, repetition_penalty=1.1 ) messages = [ { "role": "system", "content": "Sen yardımsever bir chatbotsun. Sana verilen diyalog akışına dikkat ederek diyaloğu devam ettir.", }, {"role": "user", "content": "Türkiye'de kaç il var?"} ] outputs = pipe(messages, **sampling_params) print(outputs) ``` ## Limitations, Risks, Bias, and Ethical Considerations ### Limitations and Known Biases - **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified. - **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations. 
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers. ### Risks and Ethical Considerations - **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment. - **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies. - **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks. ### Recommendations for Safe and Ethical Usage - **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly. - **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive. - **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.
aisuko/opt-125m-gptq
aisuko
2024-02-16T06:14:51Z
68
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-22T06:34:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
simprosysneel100/tiny-llam-v2
simprosysneel100
2024-02-16T06:04:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-02-16T05:48:08Z
--- library_name: peft base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
Ashwini1412/asr-nepali
Ashwini1412
2024-02-16T06:03:52Z
66
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-05T13:08:15Z
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- wer
- accuracy
model-index:
- name: asr-nepali
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# asr-nepali

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
- Cer: 0.9965
- Accuracy: 0.0035

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 180
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---:|:------:|:--------:|
| 451.545 | 1.46 | 100 | 43.3285 | 1.0 | 0.9684 | 0.0316 |
| 194.4567 | 2.92 | 200 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 4.38 | 300 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 5.84 | 400 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 7.3 | 500 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 8.76 | 600 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 10.22 | 700 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 11.68 | 800 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 13.14 | 900 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 14.6 | 1000 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 16.06 | 1100 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 17.52 | 1200 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 18.98 | 1300 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 20.44 | 1400 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 21.9 | 1500 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 23.36 | 1600 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 24.82 | 1700 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 26.28 | 1800 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 27.74 | 1900 | nan | 1.0 | 0.9965 | 0.0035 |
| 0.0 | 29.2 | 2000 | nan | 1.0 | 0.9965 | 0.0035 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
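The hyperparameters listed above map roughly onto a Hugging Face `TrainingArguments` object. A minimal sketch only — the original training script is not published, and the argument names and output directory are assumptions:

```python
from transformers import TrainingArguments

# Sketch: mirrors the hyperparameters reported in the card above.
args = TrainingArguments(
    output_dir="asr-nepali",        # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # gives the reported total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=180,
    num_train_epochs=30,
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
)
```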
sanjay782/test_qg
sanjay782
2024-02-16T05:49:46Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-hf", "base_model:adapter:NousResearch/Llama-2-7b-hf", "region:us" ]
null
2024-02-16T05:43:21Z
--- library_name: peft base_model: NousResearch/Llama-2-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2.dev0
LarryAIDraw/satsuki
LarryAIDraw
2024-02-16T05:40:15Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-16T05:33:18Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/55245/satsukiblue-archive-or-goofy-ai
LarryAIDraw/dim32NaiUzenkyoka-000002
LarryAIDraw
2024-02-16T05:39:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-16T05:32:37Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/27859/uzenkyoka-or-mato-seihei-no-slave
LarryAIDraw/dim32Nai5_fubukiazuma_uniform-000003
LarryAIDraw
2024-02-16T05:39:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-16T05:31:50Z
---
license: creativeml-openrail-m
---

https://civitai.com/models/15202?modelVersionId=65665
hupenc/flan-t5-small-ChnSentiCorp-2
hupenc
2024-02-16T05:36:23Z
89
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-16T05:36:12Z
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: flan-t5-small-ChnSentiCorp-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# flan-t5-small-ChnSentiCorp-2

This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3113
- Accuracy: 0.6253

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3573 | 1.0 | 552 | 0.3182 | 0.6225 |
| 0.3348 | 2.0 | 1104 | 0.3134 | 0.6369 |
| 0.3303 | 3.0 | 1656 | 0.3114 | 0.6211 |
| 0.3213 | 4.0 | 2208 | 0.3113 | 0.6253 |
| 0.3215 | 5.0 | 2760 | 0.3123 | 0.6294 |
| 0.3157 | 6.0 | 3312 | 0.3123 | 0.6397 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.1
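A minimal inference sketch, not from the original card: the card does not document how the ChnSentiCorp sentiment task was cast as text-to-text, so the prompt format and expected output below are assumptions.

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="hupenc/flan-t5-small-ChnSentiCorp-2")

# Hypothetical input: a ChnSentiCorp-style Chinese review.
print(pipe("这家酒店的服务很好,房间也很干净。")[0]["generated_text"])
```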
codescv123/ppo-LunarLander-v2
codescv123
2024-02-16T05:36:21Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-16T05:36:02Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 255.91 +/- 18.35
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename inside the repository is an assumption):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repository files for the actual .zip name.
checkpoint = load_from_hub("codescv123/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
hemakumari/g_name
hemakumari
2024-02-16T05:28:47Z
90
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T05:28:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Evan-Lin/positive-chosen-llama-chat-without-none
Evan-Lin
2024-02-16T05:19:23Z
1
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-29T10:25:17Z
---
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: dpo-llama-chat-without-none
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dpo-llama-chat-without-none

This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9481
- Rewards/chosen: 4.6795
- Rewards/rejected: 2.8189
- Rewards/accuracies: 0.8547
- Rewards/margins: 1.8606
- Logps/rejected: -60.8495
- Logps/chosen: -50.0326
- Logits/rejected: -0.2216
- Logits/chosen: -0.2323

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 6.3 | 0.24 | 100 | 6.1290 | 3.4767 | 3.2110 | 0.5920 | 0.2657 | -56.9286 | -62.0606 | -0.2723 | -0.2654 |
| 5.5843 | 0.48 | 200 | 5.8936 | 3.6904 | 3.2305 | 0.6520 | 0.4599 | -56.7330 | -59.9230 | 0.2517 | 0.2475 |
| 5.757 | 0.72 | 300 | 5.6694 | 3.9164 | 3.1893 | 0.7253 | 0.7271 | -57.1450 | -57.6631 | 0.3505 | 0.3418 |
| 5.5385 | 0.96 | 400 | 5.4629 | 4.1466 | 3.1351 | 0.7600 | 1.0115 | -57.6871 | -55.3611 | 0.2059 | 0.1970 |
| 5.2301 | 1.2 | 500 | 5.2891 | 4.3324 | 3.0305 | 0.7880 | 1.3020 | -58.7338 | -53.5027 | 0.1063 | 0.0968 |
| 5.0115 | 1.44 | 600 | 5.1601 | 4.4582 | 2.9458 | 0.8213 | 1.5124 | -59.5800 | -52.2452 | -0.1082 | -0.1154 |
| 4.9893 | 1.68 | 700 | 5.0431 | 4.5787 | 2.9142 | 0.8413 | 1.6645 | -59.8968 | -51.0404 | -0.1716 | -0.1829 |
| 5.0292 | 1.92 | 800 | 4.9770 | 4.6501 | 2.8827 | 0.8427 | 1.7673 | -60.2111 | -50.3266 | -0.1929 | -0.2042 |
| 4.331 | 2.16 | 900 | 4.9577 | 4.6724 | 2.8191 | 0.8480 | 1.8534 | -60.8478 | -50.1027 | -0.2005 | -0.2121 |
| 4.5481 | 2.4 | 1000 | 4.9481 | 4.6795 | 2.8189 | 0.8547 | 1.8606 | -60.8495 | -50.0326 | -0.2216 | -0.2323 |

### Framework versions

- PEFT 0.8.2
- Transformers 4.36.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
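Since this is a PEFT adapter on top of Llama-2-7b-chat, a minimal loading sketch looks like the following (not part of the original card; dtype and device handling are assumptions, and the base repository is gated):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Gated repo; requires accepted access to meta-llama/Llama-2-7b-chat-hf.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "Evan-Lin/positive-chosen-llama-chat-without-none")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```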
thrunlab/Mistral_Sparse_refined_web_90p_2024-02-15
thrunlab
2024-02-16T05:16:51Z
3
0
transformers
[ "transformers", "safetensors", "sparse_mistral", "text-generation", "generated_from_trainer", "custom_code", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-02-16T04:13:03Z
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral_Sparse_refined_web_90p_2024-02-15
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Mistral_Sparse_refined_web_90p_2024-02-15

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5010

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1

### Training results

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
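The repository tags list a custom `sparse_mistral` architecture with `custom_code`, so loading it presumably requires `trust_remote_code`. A minimal loading sketch (an assumption, not documented in the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thrunlab/Mistral_Sparse_refined_web_90p_2024-02-15"

# trust_remote_code=True executes the repo's custom modeling code; review it before running.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```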
EENDA/distilbert-finetuned-squadv2
EENDA
2024-02-16T05:10:45Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-16T02:37:55Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuned-squadv2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-finetuned-squadv2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
ggomma/aika-lora-test-247c3da0-d57b-4072-abba-1447069513f6
ggomma
2024-02-16T04:57:59Z
4
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:KantoRegion/99mix-converted", "base_model:adapter:KantoRegion/99mix-converted", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-16T03:27:28Z
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
base_model: ggomma/test
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# LoRA text2image fine-tuning - ggomma/aika-lora-test-247c3da0-d57b-4072-abba-1447069513f6

These are LoRA adaptation weights for ggomma/test. The weights were fine-tuned on the ggomma/aika-images dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
# (see the sketch after this card)
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
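A possible snippet for the "How to use" placeholder above. This is a sketch only: the card's stated base model (`ggomma/test`) conflicts with the repo metadata, which points at `KantoRegion/99mix-converted`, and the prompt is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model per the card's front matter; swap in the actual base checkpoint if it differs.
pipe = StableDiffusionPipeline.from_pretrained("ggomma/test", torch_dtype=torch.float16)
pipe.load_lora_weights("ggomma/aika-lora-test-247c3da0-d57b-4072-abba-1447069513f6")
pipe.to("cuda")

image = pipe("a photo of aika", num_inference_steps=30).images[0]  # placeholder prompt
image.save("example.png")
```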
FINNUMBER/Yi-Ko-6B-Finch-TQA-full
FINNUMBER
2024-02-16T04:54:36Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T04:17:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kudod/my_awesome_en_vi_model
Kudod
2024-02-16T04:53:47Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:CLAck/en-vi", "base_model:finetune:CLAck/en-vi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-16T04:02:11Z
---
license: apache-2.0
base_model: CLAck/en-vi
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_en_vi_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_en_vi_model

This model is a fine-tuned version of [CLAck/en-vi](https://huggingface.co/CLAck/en-vi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4345
- Bleu: 26.4223
- Gen Len: 33.0654

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.3586 | 1.0 | 8333 | 0.4580 | 24.9697 | 32.9346 |
| 0.3303 | 2.0 | 16666 | 0.4345 | 26.4223 | 33.0654 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
tsavage68/chat_1000STEPS_1e6rate_SFT_SFT
tsavage68
2024-02-16T04:51:33Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T04:48:17Z
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: chat_1000STEPS_1e6rate_SFT_SFT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# chat_1000STEPS_1e6rate_SFT_SFT

This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3054

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3957 | 0.2 | 100 | 0.3739 |
| 0.3295 | 0.39 | 200 | 0.3239 |
| 0.3211 | 0.59 | 300 | 0.3141 |
| 0.3047 | 0.78 | 400 | 0.3095 |
| 0.3072 | 0.98 | 500 | 0.3072 |
| 0.3006 | 1.17 | 600 | 0.3060 |
| 0.3109 | 1.37 | 700 | 0.3055 |
| 0.2994 | 1.56 | 800 | 0.3054 |
| 0.3219 | 1.76 | 900 | 0.3054 |
| 0.3016 | 1.95 | 1000 | 0.3054 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.0.0+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
LoudAI/kubwa-7b-josh
LoudAI
2024-02-16T04:49:42Z
192
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:beomi/llama-2-ko-7b", "base_model:merge:beomi/llama-2-ko-7b", "base_model:defog/sqlcoder-7b-2", "base_model:merge:defog/sqlcoder-7b-2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T04:49:13Z
---
base_model:
- beomi/llama-2-ko-7b
- defog/sqlcoder-7b-2
library_name: transformers
tags:
- mergekit
- merge
---

# sqlcoder-7b-2-slerp

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
* [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2)

### Configuration

The following YAML configuration was used to produce this model (indentation reconstructed from the flattened source):

```yaml
base_model:
  model:
    path: defog/sqlcoder-7b-2
dtype: float16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, -1]
    model:
      model:
        path: defog/sqlcoder-7b-2
  - layer_range: [0, -1]
    model:
      model:
        path: beomi/llama-2-ko-7b
```
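The merged checkpoint should load like any other Llama-architecture causal LM. A minimal sketch, assuming the merged weights were pushed to this repository in the standard transformers layout:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoudAI/kubwa-7b-josh"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```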
Jarmac/lab1_finetuning
Jarmac
2024-02-16T04:35:55Z
118
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-15T22:46:07Z
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: lab1_finetuning
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# marian-finetuned-kde4-en-to-fr

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
Shijia/furina_seed42_eng_kin_amh_cross_5e-06
Shijia
2024-02-16T04:29:49Z
89
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T04:28:26Z
---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_seed42_eng_kin_amh_cross_5e-06
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# furina_seed42_eng_kin_amh_cross_5e-06

This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0235
- Spearman Corr: 0.7429

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.59 | 200 | 0.0574 | 0.0907 |
| No log | 1.17 | 400 | 0.0433 | 0.3618 |
| No log | 1.76 | 600 | 0.0302 | 0.5602 |
| 0.0806 | 2.35 | 800 | 0.0310 | 0.6452 |
| 0.0806 | 2.93 | 1000 | 0.0311 | 0.6667 |
| 0.0806 | 3.52 | 1200 | 0.0341 | 0.6832 |
| 0.0313 | 4.11 | 1400 | 0.0263 | 0.6957 |
| 0.0313 | 4.69 | 1600 | 0.0366 | 0.7020 |
| 0.0313 | 5.28 | 1800 | 0.0311 | 0.7107 |
| 0.0313 | 5.87 | 2000 | 0.0340 | 0.7112 |
| 0.0257 | 6.45 | 2200 | 0.0251 | 0.7188 |
| 0.0257 | 7.04 | 2400 | 0.0229 | 0.7220 |
| 0.0257 | 7.62 | 2600 | 0.0243 | 0.7361 |
| 0.0226 | 8.21 | 2800 | 0.0217 | 0.7414 |
| 0.0226 | 8.8 | 3000 | 0.0231 | 0.7376 |
| 0.0226 | 9.38 | 3200 | 0.0233 | 0.7431 |
| 0.0226 | 9.97 | 3400 | 0.0257 | 0.7369 |
| 0.0199 | 10.56 | 3600 | 0.0241 | 0.7474 |
| 0.0199 | 11.14 | 3800 | 0.0253 | 0.7411 |
| 0.0199 | 11.73 | 4000 | 0.0257 | 0.7478 |
| 0.0178 | 12.32 | 4200 | 0.0267 | 0.7471 |
| 0.0178 | 12.9 | 4400 | 0.0235 | 0.7429 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
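The card reports Spearman correlation, which suggests a single-output regression head on the XLM-R-style encoder. A minimal scoring sketch (assumptions: sentence-pair input and a one-logit regression head, neither of which is documented in the card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Shijia/furina_seed42_eng_kin_amh_cross_5e-06"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("first sentence", "second sentence", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumed relatedness score
print(score)
```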
haochenhe/lab1_finetuning
haochenhe
2024-02-16T04:17:54Z
119
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-15T22:30:38Z
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
model-index:
- name: lab1_finetuning
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# lab1_finetuning

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
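A minimal inference sketch for this English-to-French translator (not part of the original card; the example sentence is arbitrary):

```python
from transformers import pipeline

translator = pipeline("translation", model="haochenhe/lab1_finetuning")
print(translator("Default to expanded threads")[0]["translation_text"])
```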
Shijia/furina_seed42_eng_kin_amh_cross_2e-05
Shijia
2024-02-16T04:11:11Z
99
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:yihongLiu/furina", "base_model:finetune:yihongLiu/furina", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-16T04:09:44Z
---
base_model: yihongLiu/furina
tags:
- generated_from_trainer
model-index:
- name: furina_seed42_eng_kin_amh_cross_2e-05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# furina_seed42_eng_kin_amh_cross_2e-05

This model is a fine-tuned version of [yihongLiu/furina](https://huggingface.co/yihongLiu/furina) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0237
- Spearman Corr: 0.7281

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.59 | 200 | 0.0274 | 0.5858 |
| No log | 1.17 | 400 | 0.0222 | 0.6777 |
| No log | 1.76 | 600 | 0.0266 | 0.7074 |
| 0.0409 | 2.35 | 800 | 0.0264 | 0.7084 |
| 0.0409 | 2.93 | 1000 | 0.0214 | 0.7181 |
| 0.0409 | 3.52 | 1200 | 0.0209 | 0.7224 |
| 0.02 | 4.11 | 1400 | 0.0213 | 0.7251 |
| 0.02 | 4.69 | 1600 | 0.0246 | 0.7180 |
| 0.02 | 5.28 | 1800 | 0.0270 | 0.7267 |
| 0.02 | 5.87 | 2000 | 0.0232 | 0.7295 |
| 0.014 | 6.45 | 2200 | 0.0218 | 0.7259 |
| 0.014 | 7.04 | 2400 | 0.0238 | 0.7276 |
| 0.014 | 7.62 | 2600 | 0.0224 | 0.7192 |
| 0.0102 | 8.21 | 2800 | 0.0237 | 0.7281 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2