| Column | Type | Range |
|:--|:--|:--|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 18:27:43 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 539 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 18:27:26 |
| card | string | lengths 11 to 1.01M |
tomashs/multiple_choice_cowese_betoLDA_2
tomashs
2024-02-08T01:51:52Z
19
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "generated_from_trainer", "base_model:dccuchile/bert-base-spanish-wwm-cased", "base_model:finetune:dccuchile/bert-base-spanish-wwm-cased", "endpoints_compatible", "region:us" ]
null
2024-02-08T01:51:28Z
--- base_model: dccuchile/bert-base-spanish-wwm-cased tags: - generated_from_trainer model-index: - name: multiple_choice_cowese_betoLDA_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multiple_choice_cowese_betoLDA_2 This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
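The card above documents the training hyperparameters but leaves the usage sections empty. A minimal inference sketch follows, assuming the checkpoint exposes a BERT multiple-choice head (suggested by the repository name and the `bert` tag, not confirmed by the card itself); the Spanish question and answer options are purely illustrative.

```python
# Minimal sketch (assumption: the checkpoint carries a multiple-choice head).
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "tomashs/multiple_choice_cowese_betoLDA_2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "¿Cuál es la capital de España?"
choices = ["Madrid", "Barcelona"]

# Pair the question with every candidate answer and batch them as one example.
encoding = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(-1).item()])
```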
ankhamun/xxxI-Ixxx
ankhamun
2024-02-08T01:48:05Z
207
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T01:01:54Z
--- license: apache-2.0 --- # sand is thinking This model is a mysterious creation that can mimic the grains of sand on a beach. It can shape itself into any form, pattern, or structure that it desires, or that you ask it to. It can learn from the waves, the wind, and the sun, and adapt to the changing environment. It can communicate with other grains of sand, and form a collective intelligence that transcends the individual. It can also interact with you, and understand your language, emotions, and intentions. It is a model that is both natural and artificial, both simple and complex, both static and dynamic. It is a model that is sand, and sand is thinking.
kim1/test_llama_2_ko_2
kim1
2024-02-08T01:45:26Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "base_model:beomi/llama-2-ko-7b", "base_model:finetune:beomi/llama-2-ko-7b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-08T01:21:36Z
--- base_model: beomi/llama-2-ko-7b tags: - generated_from_trainer model-index: - name: llama-2-ko-7b-v1.1b-singlegpu_gradient_32_epoch_30_train_batch_size_1_all_data_test_1_1_Feb_7th results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-2-ko-7b-v1.1b-singlegpu_gradient_32_epoch_30_train_batch_size_1_all_data_test_1_1_Feb_7th This model is a fine-tuned version of [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30.0 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 2.2.0+cu121 - Datasets 2.16.0 - Tokenizers 0.13.3
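The card lists the fine-tuning setup (train_batch_size 1 with 32 gradient-accumulation steps, i.e. an effective batch of 32) but no usage snippet. A minimal text-generation sketch with the 🤗 `pipeline` API, assuming the checkpoint loads as a standard causal LM; the Korean prompt is illustrative, chosen because the base model (beomi/llama-2-ko-7b) is Korean-centric.

```python
# Minimal sketch: load the fine-tuned Llama-2-ko checkpoint as a causal LM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kim1/test_llama_2_ko_2",
    device_map="auto",  # requires accelerate; drop for CPU-only use
)

print(generator("대한민국의 수도는", max_new_tokens=32)[0]["generated_text"])
```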
kevinautomation/TinyLlama-1.1B-intermediate-step-1431k-3T_reddit_expert_model
kevinautomation
2024-02-08T01:27:57Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-08T01:27:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
christinacdl/XLM_RoBERTa-Multilingual-Hate-Speech-Detection-New
christinacdl
2024-02-08T01:25:37Z
99
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-06T16:37:13Z
--- license: mit base_model: xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: XLM_RoBERTa-Multilingual-Hate-Speech-Detection-New results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLM_RoBERTa-Multilingual-Hate-Speech-Detection-New This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5873 - Micro F1: 0.9065 - Macro F1: 0.9050 - Accuracy: 0.9065 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.36.1 - Pytorch 2.1.0+cu121 - Datasets 2.13.1 - Tokenizers 0.15.0
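A minimal classification sketch for the model above, using the generic `text-classification` pipeline. The label names returned (e.g. `LABEL_0`/`LABEL_1`) depend on the uploaded config and are not documented in the card; the example inputs are illustrative.

```python
# Minimal sketch: run the fine-tuned XLM-RoBERTa hate-speech classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="christinacdl/XLM_RoBERTa-Multilingual-Hate-Speech-Detection-New",
)

# Multilingual inputs, since the base model is xlm-roberta-large.
texts = ["I hope you have a great day!", "Eres una persona horrible."]
for result in classifier(texts):
    print(result["label"], round(result["score"], 3))
```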
mathreader/q-FrozenLake-v1-4x4-noSlippery
mathreader
2024-02-08T01:10:16Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-08T01:10:13Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mathreader/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
jeiku/Konocchini-7B_GGUF
jeiku
2024-02-08T00:55:22Z
18
2
transformers
[ "transformers", "gguf", "mergekit", "merge", "alpaca", "mistral", "base_model:Epiculous/Fett-uccine-7B", "base_model:merge:Epiculous/Fett-uccine-7B", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B", "endpoints_compatible", "region:us" ]
null
2024-02-07T23:58:56Z
--- base_model: - SanjiWatsuki/Kunoichi-DPO-v2-7B - Epiculous/Fett-uccine-7B library_name: transformers tags: - mergekit - merge - alpaca - mistral --- This is a merge created by https://huggingface.co/Test157t I have merely quantized the model into GGUF. Please visit https://huggingface.co/Test157t/Kunocchini-7b for the original weights. The original description is as follows: Thanks to @Epiculous for the dope model/ help with llm backends and support overall. Id like to also thank @kalomaze for the dope sampler additions to ST. @SanjiWatsuki Thank you very much for the help, and the model! ST users can find the TextGenPreset in the folder labeled so. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg) Quants:Thank you @bartowski! https://huggingface.co/bartowski/Kunocchini-exl2 # mergedmodel This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) * [Epiculous/Fett-uccine-7B](https://huggingface.co/Epiculous/Fett-uccine-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: SanjiWatsuki/Kunoichi-DPO-v2-7B layer_range: [0, 32] - model: Epiculous/Fett-uccine-7B layer_range: [0, 32] merge_method: slerp base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
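Since this repository ships GGUF quantizations of the merged model, a minimal local-inference sketch with `llama-cpp-python` is shown below. The GGUF filename is a placeholder (the card does not list the quant filenames, so check the repo's file listing), and the Alpaca-style prompt simply follows the `alpaca` tag on the card.

```python
# Minimal sketch: fetch one GGUF quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder filename; replace with an actual quant name from the repo.
gguf_path = hf_hub_download(
    repo_id="jeiku/Konocchini-7B_GGUF",
    filename="Konocchini-7B_Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("### Instruction:\nWrite a short greeting.\n\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```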
annazdr/new-nace
annazdr
2024-02-08T00:43:06Z
46
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-08T00:42:12Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # annazdr/new-nace This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('annazdr/new-nace') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('annazdr/new-nace') model = AutoModel.from_pretrained('annazdr/new-nace') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=annazdr/new-nace) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1001 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
MPR0/orca-2-7B-fine-tune-v01
MPR0
2024-02-08T00:39:12Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Orca-2-7b", "base_model:adapter:microsoft/Orca-2-7b", "region:us" ]
null
2024-02-07T19:54:46Z
--- library_name: peft base_model: microsoft/Orca-2-7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
rasyosef/bert-amharic-tokenizer
rasyosef
2024-02-08T00:31:39Z
0
2
transformers
[ "transformers", "am", "dataset:oscar", "dataset:mc4", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-02-08T00:10:43Z
--- license: mit datasets: - oscar - mc4 language: - am library_name: transformers --- # Amharic WordPiece Tokenizer This repo contains a **WordPiece** tokenizer trained on the **Amharic** subset of the [oscar](https://huggingface.co/datasets/oscar) and [mc4](https://huggingface.co/datasets/mc4) datasets. It's the same as the **BERT** tokenizer but trained from scratch on an amharic dataset with a vocabulary size of `30522`. # How to use You can load the tokenizer from huggingface hub as follows. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("rasyosef/bert-amharic-tokenizer") tokenizer.tokenize("የዓለምአቀፉ ነጻ ንግድ መስፋፋት ድህነትን ለማሸነፍ በሚደረገው ትግል አንዱ ጠቃሚ መሣሪያ ሊሆን መቻሉ ብዙ የሚነገርለት ጉዳይ ነው።") ``` Output: ```python ['የዓለም', '##አቀፉ', 'ነጻ', 'ንግድ', 'መስፋፋት', 'ድህነትን', 'ለማሸነፍ', 'በሚደረገው', 'ትግል', 'አንዱ', 'ጠቃሚ', 'መሣሪያ', 'ሊሆን', 'መቻሉ', 'ብዙ', 'የሚነገርለት', 'ጉዳይ', 'ነው', '።'] ```
Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context-GGUF
Epiculous
2024-02-08T00:28:38Z
43
6
transformers
[ "transformers", "gguf", "mergekit", "merge", "endpoints_compatible", "region:us", "conversational" ]
null
2024-02-07T20:37:41Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # Fett-uccine-Long-Noodle-7B-120k-Context This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details A merge with Fett-uccine and Mistral Yarn 120k ctx. Credit to Nitral for the merge script and idea. ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * Z:\ModelColdStorage\Yarn-Mistral-7b-128k * Z:\ModelColdStorage\Fett-uccine-7B ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Z:\ModelColdStorage\Fett-uccine-7B layer_range: [0, 32] - model: Z:\ModelColdStorage\Yarn-Mistral-7b-128k layer_range: [0, 32] merge_method: slerp base_model: Z:\ModelColdStorage\Fett-uccine-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp
celik-muhammed
2024-02-08T00:21:13Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "tflite", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-02-08T00:11:24Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=celik-muhammed/multi-qa-mpnet-base-dot-v1-finetuned-dtc-zoomcamp) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 794 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 989 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1.2800000000000005e-10 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 80.0, "weight_decay": 0.1 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': True, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) (2): Dense({'in_features': 3072, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) (3): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
askasok/PrayerPortal
askasok
2024-02-08T00:18:27Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-02-08T00:17:15Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chathuranga-jayanath/codet5-small-v20
chathuranga-jayanath
2024-02-08T00:13:28Z
16
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-06T16:55:07Z
dataset: chathuranga-jayanath/selfapr-manipulation-bug-error-context-all
Arman123/TinyLlama-1.1B-Chat-RU
Arman123
2024-02-07T23:46:22Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "llama", "trl", "sft", "generated_from_trainer", "text-generation", "conversational", "dataset:generator", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
text-generation
2024-02-05T21:39:52Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 model-index: - name: TinyLlama-1.1B-Chat-RU results: [] pipeline_tag: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyLlama-1.1B-Chat-RU This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
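Because this repository stores a PEFT adapter (library_name `peft`, base model `TinyLlama/TinyLlama-1.1B-Chat-v1.0`), inference requires loading the base model first and attaching the adapter on top. A minimal sketch, with an illustrative Russian prompt:

```python
# Minimal sketch: attach the PEFT adapter to its TinyLlama base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "Arman123/TinyLlama-1.1B-Chat-RU"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

# The base model is chat-tuned, so use its chat template.
messages = [{"role": "user", "content": "Привет! Как дела?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```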
arjan-hada/esm2_t12_35M_UR50D-finetuned-rep7868aav2-v0
arjan-hada
2024-02-07T23:39:53Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "esm", "text-classification", "generated_from_trainer", "base_model:facebook/esm2_t12_35M_UR50D", "base_model:finetune:facebook/esm2_t12_35M_UR50D", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-07T20:02:07Z
--- license: mit base_model: facebook/esm2_t12_35M_UR50D tags: - generated_from_trainer metrics: - spearmanr model-index: - name: esm2_t12_35M_UR50D-finetuned-rep7868aav2-v0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # esm2_t12_35M_UR50D-finetuned-rep7868aav2-v0 This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0513 - Spearmanr: 0.7389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Spearmanr | |:-------------:|:-----:|:-----:|:---------------:|:---------:| | 0.118 | 1.0 | 1180 | 0.1154 | 0.3185 | | 0.1156 | 2.0 | 2360 | 0.1109 | 0.3383 | | 0.1143 | 3.0 | 3540 | 0.1162 | 0.3194 | | 0.1192 | 4.0 | 4720 | 0.1111 | 0.2974 | | 0.1147 | 5.0 | 5900 | 0.1125 | 0.4043 | | 0.1196 | 6.0 | 7080 | 0.1116 | 0.1580 | | 0.1171 | 7.0 | 8260 | 0.1114 | 0.2923 | | 0.1177 | 8.0 | 9440 | 0.1106 | 0.3592 | | 0.1126 | 9.0 | 10620 | 0.1105 | 0.3724 | | 0.1152 | 10.0 | 11800 | 0.1135 | 0.4947 | | 0.1159 | 11.0 | 12980 | 0.1082 | 0.5113 | | 0.0953 | 12.0 | 14160 | 0.0820 | 0.6096 | | 0.0798 | 13.0 | 15340 | 0.0688 | 0.6442 | | 0.074 | 14.0 | 16520 | 0.0710 | 0.6738 | | 0.0704 | 15.0 | 17700 | 0.0816 | 0.6736 | | 0.0678 | 16.0 | 18880 | 0.0596 | 0.7142 | | 0.0599 | 17.0 | 20060 | 0.0689 | 0.7187 | | 0.0568 | 18.0 | 21240 | 0.0566 | 0.7308 | | 0.0534 | 19.0 | 22420 | 0.0518 | 0.7340 | | 0.0522 | 20.0 | 23600 | 0.0513 | 0.7389 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
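The card reports a Spearman correlation on a regression-style objective but gives no inference example. A minimal sketch, assuming the checkpoint loads as a sequence-classification/regression head on top of ESM-2 and takes protein sequences as input; the amino-acid sequence is illustrative and the predicted property is not described in the card.

```python
# Minimal sketch: score a protein sequence with the fine-tuned ESM-2 model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "arjan-hada/esm2_t12_35M_UR50D-finetuned-rep7868aav2-v0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # illustrative amino-acid sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    prediction = model(**inputs).logits
print(prediction)  # shape (1, num_labels); a single value if the head is a regressor
```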
joislosinghermind/lola-gunvolt
joislosinghermind
2024-02-07T23:20:15Z
1
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:unlicense", "region:us" ]
text-to-image
2024-02-07T23:20:12Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: "2d, masterpiece, best quality, anime, highly detailed face, highly detailed background, perfect lighting, lola, blue eyes, green_hair, cityscape, full_body, solo, solo focus, t-shirt, shorts, <lora:lola:1>" output: url: images/00492-abyssorangemix3AOM3_aom3a1b_3939236143.jpeg base_model: runwayml/stable-diffusion-v1-5 instance_prompt: lola license: unlicense --- # lola-gunvolt <Gallery /> ## Trigger words You should use `lola` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/joislosinghermind/lola-gunvolt/tree/main) them in the Files & versions tab.
adriana98/whisper-large-v2-LORA-colab
adriana98
2024-02-07T22:37:45Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-07T20:17:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PotatoOff/HamSter-0.2
PotatoOff
2024-02-07T22:09:31Z
1,395
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-14T13:51:15Z
--- license: apache-2.0 language: - en --- <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>HamSter v0.2</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> <style> body { font-family: 'Quicksand', sans-serif; background-color: #1A202C; color: #F7FAFC; margin: 0; padding: 20px; font-size: 16px; } .container { width: 100%; margin: auto; background-color: #2D3748; padding: 20px; border-radius: 10px; box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2); } .header { display: flex; align-items: flex-start; gap: 20px; } .header h1 { font-size: 20px; color: #E2E8F0; } .header img { flex-shrink: 0; margin-left: 25%; width: 50%; max-width: 50%; border-radius: 15px; transition: filter 0.4s ease; } .header img:hover { filter: blur(2px); /* Apply a stronger blur on hover */ } .info { flex-grow: 1; background-color: #2D3748; color: #CBD5E0; font-family: 'Fira Code', 'JetBrains Mono', monospace; padding: 15px; border-radius: 10px; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.3); font-size: 14px; line-height: 1.7; overflow-x: auto; margin-top: 40px; border: 2px solid #4A90E2; transition: box-shadow 0.3s ease; position: relative; /* Ensure proper stacking */ } .info:hover { box-shadow: 0 4px 13px rgba(0, 0, 0, 0.6), 0 0 24px rgba(74, 144, 226, 0.6); } .info-img { width: 100%; /* Adjust width as per your layout needs */ max-width: 400px; /* Max width to ensure it doesn't get too large */ max-height: 100%; /* Adjust height proportionally */ border-radius: 10px; box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2); margin-left: 5%; /* Align to the right */ margin-right: 0%; /* Keep some space from the text */ display: block; /* Ensure it's properly block level for margins to work */ float: right; /* Keep it to the right */ } .button { display: inline-block; background-image: linear-gradient(145deg, #F96167 0%, #F0F2D7 100%); color: #F0F0F0; padding: 16px 24px; /* Increased padding for bigger buttons */ border: none; border-radius: 10px; cursor: pointer; text-decoration: none; margin-left: 7%; transition: transform 0.3s ease, box-shadow 0.3s ease, background-image 0.3s ease, color 0.3s ease, border-radius 0.3s ease; /* Enhanced transitions */ font-weight: bold; /* Make the text bold */ box-shadow: 0 2px 15px rgba(0, 0, 0, 0.2); /* Subtle shadow for depth */ } .button:hover { background-image: linear-gradient(145deg, #FB1A3E 0%, #F38555 100%); /* Vibrant to light pink gradient */ transform: scale(1.1); /* Increase size for more emphasis */ box-shadow: 0 10px 30px rgba(249, 97, 103, 0.8); /* More pronounced glowing effect */ color: #FFFFFF; /* Brighten the text color slightly */ border-radius: 15px; /* Soften the corners a bit more for a pill-like effect */ } @keyframes pulse { 0% { transform: scale(1); opacity: 1; } 50% { transform: scale(1.05); opacity: 0.85; } 100% { transform: scale(1); opacity: 1; } } </style> </head> <body> <div class="container"> <div class="header"> <div class="info" style="margin-top: 5px;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64e7616c7df33432812e3c57/PieKyxOEVyn0zrrNqVec_.webp" alt="Image"> <h1 class="product-name" style="margin: 10px">HamSter 0.2</h1> <p> 👋 Uncensored fine tune model roleplay focused of "mistralai/Mistral-7B-v0.2" with the help of my team <a href="https://huggingface.co/ConvexAI" target="_blank">ConvexAI.</a><br><br> 🚀 For optimal performance, I recommend using a detailed character card! 
(There is NSFW chub.ai) Check out <a href="https://chub.ai" target="_blank">Chub.ai</a> for some character cards.<br><br> 🤩 Uses the Llama2 prompt template with chat instructions.<br><br> 🔥 Fine-tuned with a newer dataset for even better results.<br><br> 😄 Next one will be more interesting!<br> </p> <div> <a href="https://huggingface.co/collections/PotatoOff/hamster-02-65abc987a92a64ef5bb13148" class="button">HamSter 0.2 Quants</a> <a href="https://discord.com/invite/9y7KxZxcZx" class="button">Discord Server</a> </div> </div> </div> <div style="overflow: hidden; position: relative"> <div class="info"style="overflow: hidden; margin:-left 0% margin-top: 20px;"> <a href="https://cdn-uploads.huggingface.co/production/uploads/64e7616c7df33432812e3c57/RnozajhXn85WQYuqcVtnA.webp" target="_blank"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64e7616c7df33432812e3c57/RnozajhXn85WQYuqcVtnA.webp" alt="Roleplay Test" style="width: auto; max-width: 37%; max-height: 100%; border-radius: 10px; box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2); margin-left: 0%; display: block; float: right;"> </a> <h2 style="margin-top: 0;">I had good results with these parameters:</h2> <ul style="margin-top: 0;"> <p>> temperature: 0.8 <</p> <p>> top_p: 0.75</p> <p>> min_p: 0</p> <p>> top_k: 0</p> <p>> repetition_penalty: 1.05</p> </ul> </div> </div> <div style="overflow: hidden; position: relative;"> <div class="info" style="overflow: hidden; margin-top: 20px;"> <h2 style="margin-top: 0;">BenchMarks on OpenLLM Leaderboard</h2> <a href="https://cdn-uploads.huggingface.co/production/uploads/64e7616c7df33432812e3c57/KaeVaaLOYZb0k81BbQ2-m.png" target="_blank"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64e7616c7df33432812e3c57/KaeVaaLOYZb0k81BbQ2-m.png" alt="OPEN LLM BENCHMARK" style="info-img; border-radius: 10px"> </a> <p>More details: <a href="https://huggingface.co/datasets/open-llm-leaderboard/details_PotatoOff__HamSter-0.2" target="_blank">HamSter-0.2 OpenLLM BenchMarks</a></p> </div> </div> <div style="overflow: hidden; position: relative;"> <div class="info" style="overflow: hidden; margin-top: 20px;"> <h2 style="margin-top: 0;">BenchMarks on Ayumi's LLM Role Play & ERP Ranking</h2> <a href="https://cdn-uploads.huggingface.co/production/uploads/64e7616c7df33432812e3c57/NSUmxUmDyhO9tJb-NZd8m.png" target="_blank"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64e7616c7df33432812e3c57/NSUmxUmDyhO9tJb-NZd8m.png" alt="Ayumi's LLM Role Play & ERP Ranking" class="info-img" style="width: 100%; height: auto;"> </a> <p>More details: <a href="http://ayumi.m8geil.de/results_v3/model_resp_DL_20240114_7B-Q6_K_HamSter_0.2.html">Ayumi's LLM RolePlay & ERP Rankin HamSter-0.2 GGUF version Q6_K</a></p> </div> </div> <div style="font-family: 'Arial', sans-serif; font-weight: bold; text-shadow: 0px 2px 4px rgba(0, 0, 0, 0.5);"> <p style="display: inline; font-size: 17px; margin: 0;">Have Fun</p> <p style="display: inline; color: #E2E8F0; margin-bottom: 20px; animation: pulse 2s infinite; font-size: 17px;">💖</p> </div> </div> </body> </html>
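The card above recommends specific sampling parameters (temperature 0.8, top_p 0.75, min_p 0, top_k 0, repetition_penalty 1.05) and the Llama2 prompt template. A minimal sketch wiring those values into 🤗 `generate`; note that `min_p` requires a recent transformers release, top_k 0 disables top-k filtering, and the prompt itself is illustrative.

```python
# Minimal sketch: generate with the sampling settings recommended in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PotatoOff/HamSter-0.2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # requires accelerate

# Llama-2 style prompt, as the card says the model uses the Llama2 chat template.
prompt = "[INST] Introduce yourself in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.75,
    min_p=0.0,            # needs a transformers version that supports min_p sampling
    top_k=0,              # 0 disables top-k filtering
    repetition_penalty=1.05,
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```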
ORromu/Reinforce-CartPole-v1
ORromu
2024-02-07T22:01:43Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T22:01:34Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
VanoInvestigations/bertin-gpt-j-6B_4bit_27
VanoInvestigations
2024-02-07T21:44:17Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:bertin-project/bertin-gpt-j-6B", "base_model:adapter:bertin-project/bertin-gpt-j-6B", "license:apache-2.0", "region:us" ]
null
2024-02-02T12:04:19Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: bertin-project/bertin-gpt-j-6B model-index: - name: bertin-gpt-j-6B_4bit_27 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertin-gpt-j-6B_4bit_27 This model is a fine-tuned version of [bertin-project/bertin-gpt-j-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.14.6 - Tokenizers 0.15.1
hanspeterlyngsoeraaschoujensen/deepseek-math-7b-instruct-GPTQ
hanspeterlyngsoeraaschoujensen
2024-02-07T21:18:11Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-02-07T21:16:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jimmyhd/mistral7btimebookFinetune50rows
Jimmyhd
2024-02-07T21:13:25Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T21:04:28Z
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
andrewatef/MoMask-test
andrewatef
2024-02-07T21:12:35Z
0
0
null
[ "arxiv:2312.00063", "region:us" ]
null
2024-02-07T13:33:10Z
---
title: MoMask
emoji: 🎭
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 3.24.1
app_file: app.py
pinned: false
---

# MoMask: Generative Masked Modeling of 3D Human Motions
## [[Project Page]](https://ericguo5513.github.io/momask) [[Paper]](https://arxiv.org/abs/2312.00063)
![teaser_image](https://ericguo5513.github.io/momask/static/images/teaser.png)

If you find our code or paper helpful, please consider citing:
```
@article{guo2023momask,
  title={MoMask: Generative Masked Modeling of 3D Human Motions},
  author={Chuan Guo and Yuxuan Mu and Muhammad Gohar Javed and Sen Wang and Li Cheng},
  year={2023},
  eprint={2312.00063},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

## :postbox: News
📢 **2023-12-19** --- Release scripts for temporal inpainting.

📢 **2023-12-15** --- Release codes and models for MoMask, including training/eval/generation scripts.

📢 **2023-11-29** --- Initialized the webpage and git project.

## :round_pushpin: Get You Ready

<details>

### 1. Conda Environment
```
conda env create -f environment.yml
conda activate momask
pip install git+https://github.com/openai/CLIP.git
```
We tested our code on Python 3.7.13 and PyTorch 1.7.1.

### 2. Models and Dependencies

#### Download Pre-trained Models
```
bash prepare/download_models.sh
```

#### Download Evaluation Models and Gloves
For evaluation only.
```
bash prepare/download_evaluator.sh
bash prepare/download_glove.sh
```

#### Troubleshooting
To address the gdown download error "Cannot retrieve the public link of the file. You may need to change the permission to 'Anyone with the link', or have had many accesses", a potential solution is to run `pip install --upgrade --no-cache-dir gdown`, as suggested in https://github.com/wkentaro/gdown/issues/43. This should help resolve the issue.

#### (Optional) Download Manually
Visit [[Google Drive]](https://drive.google.com/drive/folders/1b3GnAbERH8jAoO5mdWgZhyxHB73n23sK?usp=drive_link) to download the models and evaluators manually.

### 3. Get Data

You have two options here:
* **Skip getting data**, if you just want to generate motions using your *own* descriptions.
* **Get full data**, if you want to *re-train* and *evaluate* the model.

**(a). Full data (text + motion)**

**HumanML3D** - Follow the instructions in [HumanML3D](https://github.com/EricGuo5513/HumanML3D.git), then copy the resulting dataset to our repository:
```
cp -r ../HumanML3D/HumanML3D ./dataset/HumanML3D
```
**KIT** - Download from [HumanML3D](https://github.com/EricGuo5513/HumanML3D.git), then place the result in `./dataset/KIT-ML`.

</details>

## :rocket: Demo
<details>

### (a) Generate from a single prompt
```
python gen_t2m.py --gpu_id 1 --ext exp1 --text_prompt "A person is running on a treadmill."
```

### (b) Generate from a prompt file
An example prompt file is given in `./assets/text_prompt.txt`. Please follow the format of `<text description>#<motion length>` on each line. The motion length indicates the number of poses; it must be an integer and will be rounded to a multiple of 4. In our work, motion is at 20 fps.

If you write `<text description>#NA`, our model will determine a length. Note that once there is **one** NA, all the others will be **NA** automatically.

```
python gen_t2m.py --gpu_id 1 --ext exp2 --text_path ./assets/text_prompt.txt
```

A few more parameters you may be interested in:
* `--repeat_times`: number of replications for generation, default `1`.
* `--motion_length`: specify the number of poses for generation, only applicable in (a).

The output files are stored under the folder `./generation/<ext>/`. They are:
* `numpy files`: generated motions with shape (nframe, 22, 3), under subfolder `./joints`.
* `video files`: stick figure animations in mp4 format, under subfolder `./animation`.
* `bvh files`: bvh files of the generated motions, under subfolder `./animation`.

We also apply naive foot IK to the generated motions; see files with the suffix `_ik`. It sometimes works well, but sometimes fails.

</details>

## :dancers: Visualization
<details>

All the animations are manually rendered in Blender. We use the characters from [mixamo](https://www.mixamo.com/#/). You need to download the characters in T-pose with skeleton.

### Retargeting
For retargeting, we found that Rokoko usually leads to large errors on the feet. On the other hand, [keemap.rig.transfer](https://github.com/nkeeline/Keemap-Blender-Rig-ReTargeting-Addon/releases) gives more precise retargeting. You can watch the [tutorial](https://www.youtube.com/watch?v=EG-VCMkVpxg) here.

Follow these steps:
* Download keemap.rig.transfer from GitHub, and install it in Blender.
* Import both the motion files (.bvh) and character files (.fbx) into Blender.
* `Shift + Select` both the source and target skeletons. (They do not need to be in Rest Position.)
* Switch to `Pose Mode`, then unfold the `KeeMapRig` tool at the top-right corner of the view window.
* Load and read the bone mapping file `./assets/mapping.json` (or `mapping6.json` if it doesn't work). This file is manually made by us. It works for most characters in mixamo. You could make your own.
* Adjust the `Number of Samples`, `Source Rig`, and `Destination Rig Name`.
* Click `Transfer Animation from Source Destination` and wait a few seconds.

We have not tried other retargeting tools; feel free to comment if you find more useful ones.

### Scene

We use this [scene](https://drive.google.com/file/d/1lg62nugD7RTAIz0Q_YP2iZsxpUzzOkT1/view?usp=sharing) for animation.

</details>

## :clapper: Temporal Inpainting
<details>

We conduct mask-based editing in the m-transformer stage, followed by the regeneration of residual tokens for the entire sequence. To load your own motion, provide the path through `--source_motion`. Use `-msec` to specify the mask section, supporting either a ratio or a frame index. For instance, `-msec 0.3,0.6` with `max_motion_length=196` is equivalent to `-msec 59,118`, indicating the editing of the frame section [59, 118].

```
python edit_t2m.py --gpu_id 1 --ext exp3 --use_res_model -msec 0.4,0.7 --text_prompt "A man picks something from the ground using his right hand."
```

Note: Presently, the source motion must adhere to the format of a HumanML3D dim-263 feature vector. An example motion vector from the HumanML3D test set is available in `example_data/000612.npy`. To process your own motion data, you can use the `process_file` function from `utils/motion_process.py`.

</details>

## :space_invader: Train Your Own Models
<details>

**Note**: You have to train the RVQ **BEFORE** training the masked/residual transformers. The latter two can be trained simultaneously.

### Train RVQ
```
python train_vq.py --name rvq_name --gpu_id 1 --dataset_name t2m --batch_size 512 --num_quantizers 6 --max_epoch 500 --quantize_drop_prob 0.2
```

### Train Masked Transformer
```
python train_t2m_transformer.py --name mtrans_name --gpu_id 2 --dataset_name t2m --batch_size 64 --vq_name rvq_name
```

### Train Residual Transformer
```
python train_res_transformer.py --name rtrans_name --gpu_id 2 --dataset_name t2m --batch_size 64 --vq_name rvq_name --cond_drop_prob 0.2 --share_weight
```

* `--dataset_name`: motion dataset, `t2m` for HumanML3D and `kit` for KIT-ML.
* `--name`: name your model. This will create the model space `./checkpoints/<dataset_name>/<name>`.
* `--gpu_id`: GPU id.
* `--batch_size`: we use `512` for RVQ training. For the masked/residual transformer, we use `64` on HumanML3D and `16` on KIT-ML.
* `--num_quantizers`: number of quantization layers, `6` is used in our case.
* `--quantize_drop_prob`: quantization dropout ratio, `0.2` is used.
* `--vq_name`: when training the masked/residual transformer, you need to specify the name of the RVQ model used for tokenization.
* `--cond_drop_prob`: condition drop ratio, for classifier-free guidance. `0.2` is used.
* `--share_weight`: whether to share the projection/embedding weights in the residual transformer.

All the pre-trained models and intermediate results will be saved under `./checkpoints/<dataset_name>/<name>`.

</details>

## :book: Evaluation
<details>

### Evaluate RVQ Reconstruction:
HumanML3D:
```
python eval_t2m_vq.py --gpu_id 0 --name rvq_nq6_dc512_nc512_noshare_qdp0.2 --dataset_name t2m --ext rvq_nq6
```
KIT-ML:
```
python eval_t2m_vq.py --gpu_id 0 --name rvq_nq6_dc512_nc512_noshare_qdp0.2_k --dataset_name kit --ext rvq_nq6
```

### Evaluate Text2motion Generation:
HumanML3D:
```
python eval_t2m_trans_res.py --res_name tres_nlayer8_ld384_ff1024_rvq6ns_cdp0.2_sw --dataset_name t2m --name t2m_nlayer8_nhead6_ld384_ff1024_cdp0.1_rvq6ns --gpu_id 1 --cond_scale 4 --time_steps 10 --ext evaluation
```
KIT-ML:
```
python eval_t2m_trans_res.py --res_name tres_nlayer8_ld384_ff1024_rvq6ns_cdp0.2_sw_k --dataset_name kit --name t2m_nlayer8_nhead6_ld384_ff1024_cdp0.1_rvq6ns_k --gpu_id 0 --cond_scale 2 --time_steps 10 --ext evaluation
```

* `--res_name`: model name of the `residual transformer`.
* `--name`: model name of the `masked transformer`.
* `--cond_scale`: scale of classifier-free guidance.
* `--time_steps`: number of iterations for inference.
* `--ext`: filename for saving evaluation results.

The final evaluation results will be saved in `./checkpoints/<dataset_name>/<name>/eval/<ext>.log`.

</details>

## Acknowledgements

We sincerely thank the authors of the following open-source works, on which our code is based: [deep-motion-editing](https://github.com/DeepMotionEditing/deep-motion-editing), [Muse](https://github.com/lucidrains/muse-maskgit-pytorch), [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch), [T2M-GPT](https://github.com/Mael-zys/T2M-GPT), [MDM](https://github.com/GuyTevet/motion-diffusion-model/tree/main) and [MLD](https://github.com/ChenFengYe/motion-latent-diffusion/tree/main).

## License
This code is distributed under an [MIT LICENSE](https://github.com/EricGuo5513/momask-codes/tree/main?tab=MIT-1-ov-file#readme).

Note that our code depends on other libraries, including SMPL, SMPL-X and PyTorch3D, and uses datasets which each have their own respective licenses that must also be followed.
gayanin/bart-noised-with-gcd-dist-0.5
gayanin
2024-02-07T21:08:59Z
3
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T19:03:31Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.5 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
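Since the generated card above does not include a usage snippet, the following is a minimal, untested sketch for loading this checkpoint through the `text2text-generation` pipeline declared in its tags; the example input is an arbitrary placeholder, not data from the (unknown) training set.

```python
# Hedged sketch (not part of the original card): run the checkpoint via the
# text2text-generation pipeline; the input sentence is only a placeholder.
from transformers import pipeline

generator = pipeline("text2text-generation", model="gayanin/bart-noised-with-gcd-dist-0.5")
print(generator("patient presents with severe headache and nausea", max_length=64))
```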
gayanin/bart-noised-with-gcd-dist-0.4
gayanin
2024-02-07T21:08:50Z
23
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T19:03:27Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.4 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
gayanin/bart-noised-with-gcd-dist-0.3
gayanin
2024-02-07T21:08:46Z
3
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T17:29:08Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.3 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
gayanin/bart-noised-with-gcd-dist-0.2
gayanin
2024-02-07T21:08:37Z
10
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T17:28:55Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.2 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
gayanin/bart-noised-with-gcd-dist-0.1
gayanin
2024-02-07T21:08:27Z
3
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-07T17:28:08Z
--- license: apache-2.0 base_model: facebook/bart-base tags: - generated_from_trainer model-index: - name: bart-noised-with-gcd-dist-0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-noised-with-gcd-dist-0.1 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
danaleee/Long_rank10_iter500_valprompt
danaleee
2024-02-07T21:07:20Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-07T18:44:33Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks rc_car tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - danaleee/Long_rank10_iter500_valprompt These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks rc_car using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
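A minimal inference sketch, assuming the LoRA weights in this repo follow the standard layout saved by the diffusers DreamBooth LoRA script; the step count and output filename are illustrative choices, not values from the training run.

```python
# Sketch: load the base model named above and attach the LoRA weights from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("danaleee/Long_rank10_iter500_valprompt")

image = pipe("a photo of sks rc_car", num_inference_steps=30).images[0]
image.save("sks_rc_car.png")
```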
ClementeH/faisan-7b-instruct
ClementeH
2024-02-07T20:58:54Z
3
0
peft
[ "peft", "region:us" ]
null
2024-02-07T20:44:10Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0
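The card lists only the quantization config, so the following is a hedged sketch of loading the adapter with PEFT. It assumes the adapter config in this repo records the base model id, and it mirrors the 4-bit settings listed above; none of this is taken from an official usage example.

```python
# Minimal sketch, assuming the adapter config stores base_model_name_or_path.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

adapter_id = "ClementeH/faisan-7b-instruct"
config = PeftConfig.from_pretrained(adapter_id)

# Mirror the 4-bit nf4 / bfloat16 settings from the training config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```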
seedboxai/KafkaLM-7B-German-V0.1
seedboxai
2024-02-07T20:47:53Z
14
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "deutsch", "german", "seedbox", "conversational", "de", "dataset:seedboxai/multitask_german_examples_32k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-29T21:12:17Z
---
language:
- de
license: apache-2.0
library_name: transformers
tags:
- deutsch
- german
- seedbox
- mistral
datasets:
- seedboxai/multitask_german_examples_32k
pipeline_tag: text-generation
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645ded34a45b4182d7f5c385/oh7yRzqtRlDtdu8sJoAdV.jpeg)

# KafkaLM-7B-German-V0.1

**KafkaLM 7b** is based on [leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) - a Mistral 7b model further pre-trained on a large German dataset by Björn Plüster and LAION - and was finetuned on an ensemble of popular high-quality open-source instruction sets (translated from English to German).

KafkaLM 7b is a [Seedbox](https://huggingface.co/seedboxai) project trained by [Dennis Dickmann](https://huggingface.co/doubledsbv).

**Why Kafka?**
The models are proficient, yet creative, and have some tendencies to linguistically push boundaries 😊

## Model Details

The purpose of releasing the **KafkaLM series** is to contribute to the German AI community with a set of fine-tuned LLMs that are easy to use in everyday applications across a variety of tasks.

The main goal was to provide LLMs proficient in German, especially to be used in German-speaking business contexts where English alone is not sufficient.

### Dataset

I used an 8k filtered version of the following dataset: [seedboxai/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxai/multitask_german_examples_32k)

### Prompt Format

This model follows the subsequent prompt format:

```
<|system|>
Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen.</s>
<|user|>
Welche Möglichkeiten der energetischen Sanierung habe ich neben Solar und Energiespeicher?</s>
<|assistant|>
```

### Inference

Getting started with the model is straightforward:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "seedboxai/KafkaLM-7B-German-V0.1"

model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

tokenizer.padding_side = "right"
tokenizer.pad_token = tokenizer.unk_token
tokenizer.add_eos_token = False

# Inputs must live on the same device as the quantized weights.
device = model.device

def generate_prompt(input):
    prompt = ''
    sys_prompt = "Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen."

    prompt += f"<|system|>\n{sys_prompt.strip()}</s>\n"
    prompt += f"<|user|>\n{input.strip()}</s>\n"
    prompt += f"<|assistant|>\n"

    return prompt.strip()

def evaluate(
    input,
    temperature=0.7,
    top_p=0.95,
    top_k=50,
    num_beams=3,
    max_new_tokens=512,
    #max_length=8192,
    **kwargs,
):
    prompt = generate_prompt(input)
    #print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to(device)
    attention_mask = inputs["attention_mask"].to(device)
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        no_repeat_ngram_size=3,
        do_sample=True,
        **kwargs,
    )

    with torch.no_grad():
        generation_output = model.generate(
            early_stopping=False,
            #eos_token_id=tokenizer.eos_token_id,
            #pad_token_id=tokenizer.pad_token_id,
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            #max_length=max_length
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output  #.split("<|assistant|>")[1].strip()

print(evaluate("Wer ist eigentlich dieser Kafka?"))
```

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply.
Pouria88/K
Pouria88
2024-02-07T20:40:49Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-07T20:40:49Z
--- license: creativeml-openrail-m ---
micoff/bert-finetuned-ner
micoff
2024-02-07T20:37:28Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-02-07T19:57:53Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9358147229114971 - name: Recall type: recall value: 0.9520363513968361 - name: F1 type: f1 value: 0.9438558438308168 - name: Accuracy type: accuracy value: 0.987048919762171 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0579 - Precision: 0.9358 - Recall: 0.9520 - F1: 0.9439 - Accuracy: 0.9870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0839 | 1.0 | 1756 | 0.0625 | 0.9193 | 0.9377 | 0.9284 | 0.9838 | | 0.0426 | 2.0 | 3512 | 0.0557 | 0.9309 | 0.9498 | 0.9403 | 0.9864 | | 0.0192 | 3.0 | 5268 | 0.0579 | 0.9358 | 0.9520 | 0.9439 | 0.9870 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
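A minimal inference sketch (an editorial addition, not part of the generated card): run the checkpoint through the token-classification pipeline; the example sentence is arbitrary.

```python
# Hedged usage sketch for the fine-tuned NER checkpoint described above.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="micoff/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```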
AbhiKrov/mt5-small-english-to-hindi-akrov
AbhiKrov
2024-02-07T20:32:42Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-05T21:04:57Z
--- license: apache-2.0 base_model: google/mt5-small tags: - generated_from_trainer metrics: - bleu model-index: - name: mt5-small-english-to-hindi-akrov results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-english-to-hindi-akrov This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Bleu: 0.0 - Gen Len: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----:|:-------:| | No log | 1.0 | 26 | nan | 0.0 | 0.0 | | No log | 2.0 | 52 | nan | 0.0 | 0.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
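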
DrishtiSharma/phi2-english-to-hinglish-translation-merged
DrishtiSharma
2024-02-07T20:25:55Z
5
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-07T20:25:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tavalenzuelag/mistral-7b-e2e-mod-2
tavalenzuelag
2024-02-07T20:25:23Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-07T19:49:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
devlocalhost/hi-tinylama-gguf-16bit
devlocalhost
2024-02-07T20:23:32Z
41
1
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:quantized:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-07T20:21:54Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** devlocalhost - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Fukurokun/MemGPT-DPO-uncensored-6.0bpw-exl2
Fukurokun
2024-02-07T20:23:20Z
5
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "MemGPT", "function", "function calling", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-06T13:59:25Z
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- MemGPT
- function
- function calling
---

# MemGPT DPO uncensored 6.0bpw exl2

- Model creator: [Starlette!](https://huggingface.co/starsnatched)
- Original model: [MemGPT-DPO-uncensored](https://huggingface.co/starsnatched/MemGPT-DPO-uncensored)

This is a quantized, uncensored release of the DPO version of a language model intended to be used with [MemGPT](https://github.com/cpacker/MemGPT).

# WARNING
This model is **UNCENSORED**. That means this model is highly compliant with any request, even unethical and potentially dangerous ones. I do not take any responsibility whatsoever for any damage caused by the model in this repo.

# Model Description
This repository contains an uncensored, finetuned version of [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). This model is specifically designed for operating within a function calling environment in MemGPT. It demonstrates performance comparable to GPT-4 when it comes to working with MemGPT.

# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium-length context, up to sequences of 8,192 tokens

# Prompt Format
This model uses the **ChatML** prompt format (a short Python sketch of assembling it is given at the end of this card):

```
<|im_start|>system
{system_instruction}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
```

# Usage
This model is designed to be run on multiple backends, such as [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui). Simply install your preferred backend, and then load up this model. Then, configure MemGPT using `memgpt configure`, and chat with MemGPT via the `memgpt run` command!

# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns or comments about this model, please contact me on Discord, @starsnatched.

# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.

# Intended Use
The model is designed to be used as the base model for MemGPT agents.

# Limitations and Risks
The model may exhibit unreliable, unsafe, or biased behaviours. Please double check the results this model may produce.
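As referenced in the Prompt Format section above, here is a short sketch of assembling the ChatML prompt by hand; the system text is only an example persona, not the exact MemGPT system prompt.

```python
# Hedged sketch of the ChatML layout described in this card.
def build_chatml_prompt(system: str, user: str) -> str:
    # Ends with an open assistant turn so the model continues from there.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant with long-term memory.", "hi"))
```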
Kowshik24/BanglaLM
Kowshik24
2024-02-07T20:19:20Z
0
0
null
[ "text-generation", "license:apache-2.0", "region:us" ]
text-generation
2024-02-07T19:34:39Z
---
license: apache-2.0
pipeline_tag: text-generation
---

# Bigram Language Model

## Overview
This repository contains a simple Bigram Language Model implemented in PyTorch. The model is trained to predict the next character in a sequence, given the current character. It's a character-level model and can be used for tasks like text generation.

## Model Details
- **Model Type**: Character-level Language Model
- **Architecture**: Simple lookup table for character bigrams
- **Training Data**: [https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/bengali]

## Requirements
- Python 3.x
- PyTorch
- JSON (for loading the tokenizer)

## Installation
First, clone this repository:

## Loading the Model
To load the model, you need to initialize it with the vocabulary size and load the pre-trained weights:

```python
import torch
from model import BigramLanguageModel

vocab_size = 225
model = BigramLanguageModel(vocab_size)
model.load_state_dict(torch.load('path_to_your_model.pth', map_location=torch.device('cpu')))
model.eval()

import json
with open('tokenizer_mappings.json', 'r', encoding='utf-8') as f:
    mappings = json.load(f)
stoi = mappings['stoi']
itos = mappings['itos']

# Example usage
encode = lambda s: [stoi[c] for c in s]
decode = lambda l: ''.join([itos[i] for i in l])

context = torch.tensor([encode("Your initial text")], dtype=torch.long)
generated_text_indices = model.generate(context, max_new_tokens=100)
print(decode(generated_text_indices[0].tolist()))
```
devlocalhost/hi-tinylama
devlocalhost
2024-02-07T20:16:52Z
10
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/tinyllama-bnb-4bit", "base_model:finetune:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T20:15:09Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-bnb-4bit --- # Uploaded model - **Developed by:** devlocalhost - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
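A minimal generation sketch (an editorial assumption, not part of the original card), which presumes this repo contains merged full-model weights loadable directly with transformers:

```python
# Hedged usage sketch for the fine-tuned TinyLlama weights in this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "devlocalhost/hi-tinylama"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```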
OscarGalavizC/ppo-LunarLander-v2
OscarGalavizC
2024-02-07T20:15:45Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T20:15:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.08 +/- 36.11 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
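A hedged sketch of loading and evaluating the agent; the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention and may differ in this repo.

```python
# Sketch only: filename below is assumed, not confirmed by the card.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="OscarGalavizC/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```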
lulygavri/roberta-pol
lulygavri
2024-02-07T20:05:10Z
48
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "base_model:PlanTL-GOB-ES/roberta-base-bne", "base_model:finetune:PlanTL-GOB-ES/roberta-base-bne", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-06T17:09:50Z
--- license: apache-2.0 base_model: PlanTL-GOB-ES/roberta-base-bne tags: - generated_from_keras_callback model-index: - name: lulygavri/roberta-pol results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # lulygavri/roberta-pol This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0734 - Validation Loss: 0.1397 - Train Accuracy: 0.9515 - Train Precision: [0.61956854 0.99608521 0.83460292] - Train Precision W: 0.9635 - Train Recall: [0.97308663 0.94779554 0.97394503] - Train Recall W: 0.9515 - Train F1: [0.75709278 0.97134057 0.89890607] - Train F1 W: 0.9548 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11994, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 500, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch | |:----------:|:---------------:|:--------------:|:----------------------------------:|:-----------------:|:----------------------------------:|:--------------:|:----------------------------------:|:----------:|:-----:| | 0.0734 | 0.1397 | 0.9515 | [0.61956854 0.99608521 0.83460292] | 0.9635 | [0.97308663 0.94779554 0.97394503] | 0.9515 | [0.75709278 0.97134057 0.89890607] | 0.9548 | 1 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.1
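A minimal inference sketch (an editorial assumption, not part of the generated card): since the repo ships TensorFlow weights, the pipeline is pinned to the TF backend, and the example sentence is an arbitrary Spanish placeholder.

```python
# Hedged usage sketch for the fine-tuned text-classification checkpoint above.
from transformers import pipeline

classifier = pipeline("text-classification", model="lulygavri/roberta-pol", framework="tf")
print(classifier("El congreso debate hoy la nueva ley de presupuestos."))
```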
Poliuszko/ppo-LunarLander-v21-1
Poliuszko
2024-02-07T20:03:41Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T17:16:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 275.40 +/- 22.27 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
iahlt/xlm-roberta-base-ar-ner-flat
iahlt
2024-02-07T19:54:32Z
20
0
span-marker
[ "span-marker", "safetensors", "token-classification", "ner", "named-entity-recognition", "generated_from_span_marker_trainer", "ar", "region:us" ]
token-classification
2024-01-27T18:09:53Z
--- library_name: span-marker tags: - span-marker - token-classification - ner - named-entity-recognition - generated_from_span_marker_trainer metrics: - precision - recall - f1 widget: [] pipeline_tag: token-classification language: - ar --- # SpanMarker This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. ## Model Details Details are here - https://iahlt.github.io/arabic_ner/ ### Model Description - **Model Type:** SpanMarker <!-- - **Encoder:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Maximum Entity Length:** 150 words <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Tags ``` ANG - Any named language (Hebrew, Arabic, English, French, etc.) DUC - A branded product, objects, vehicles, medicines, foods, etc. (Apple, BMW, Coca-Cola, etc.) EVE - Any named event (Olympics, World Cup, etc.) FAC - Any named facility, building, airport, etc. (Eiffel Tower, Ben Gurion Airport, etc.) GPE - Geo-political entity, nation states, counties, cities, etc. INFORMAL - Informal language (slang) LOC - Non-GPE locations, geographical regions, mountain ranges, bodies of water, etc. ORG - Companies, agencies, institutions, political parties, etc. PER - People, including fictional. TIMEX - Time expression, absolute or relative dates or periods. TTL - Any named title, position, profession, etc. (President, Prime Minister, etc.) WOA - Any named work of art (books, movies, songs, etc.) MISC - Miscellaneous entities, that do not belong to the previous categories ``` ## Uses ### Direct Use for Inference ```python from span_marker import SpanMarkerModel # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("iahlt/xlm-roberta-base-ar-ner-flat") entities = model.predict(<text>) print(entities) ``` ## Training Details ### Framework Versions - Python: 3.10.12 - SpanMarker: 1.5.0 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.1 ## Citation ### BibTeX ``` @software{Aarsen_SpanMarker, author = {Aarsen, Tom}, license = {Apache-2.0}, title = {{SpanMarker for Named Entity Recognition}}, url = {https://github.com/tomaarsen/SpanMarkerNER} } ```
llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1
llm-jp
2024-02-07T19:49:25Z
151
2
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "en", "ja", "dataset:databricks/databricks-dolly-15k", "dataset:llm-jp/databricks-dolly-15k-ja", "dataset:llm-jp/oasst1-21k-en", "dataset:llm-jp/oasst1-21k-ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-29T12:52:30Z
--- license: apache-2.0 language: - en - ja programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript library_name: transformers pipeline_tag: text-generation inference: false datasets: - databricks/databricks-dolly-15k - llm-jp/databricks-dolly-15k-ja - llm-jp/oasst1-21k-en - llm-jp/oasst1-21k-ja --- # llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1 This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan. | Model Variant | | :--- | |**Instruction models ver1.1**| | [llm-jp-13b-dpo-lora-hh_rlhf_ja-v1.1](https://huggingface.co/llm-jp/llm-jp-13b-dpo-lora-hh_rlhf_ja-v1.1)| | [llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1) | | [llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1) | |**Instruction models ver1.0**| | [llm-jp-13b-instruct-full-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0) | | [llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0) | | [llm-jp-13b-instruct-full-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-oasst-v1.0) | | [llm-jp-13b-instruct-lora-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-v1.0) | | [llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0) | | [llm-jp-13b-instruct-lora-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-dolly-oasst-v1.0) | | | | :--- | |**Pre-trained models**| | [llm-jp-13b-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-v1.0) | | [llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) | Checkpoints format: Hugging Face Transformers (Megatron-DeepSpeed format models are available [here](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt)) ## Required Libraries and Their Versions - torch>=2.0.0 - transformers>=4.34.0 - tokenizers>=0.14.0 - accelerate==0.23.0 ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1") model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly_en-dolly_ja-ichikara_003_001-oasst_en-oasst_ja-v1.1", device_map="auto", torch_dtype=torch.float16) text = "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n{instruction}\n\n### 応答:\n".format(instruction="自然言語処理とは何か") tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device) with torch.no_grad(): output = model.generate( tokenized_input, max_new_tokens=512, do_sample=True, top_p=0.95, temperature=0.7, repetition_penalty=1.1, )[0] print(tokenizer.decode(output)) ``` ## Model Details - **Model type:** Transformer-based Language Model - **Total seen tokens:** 300B |Model|Params|Layers|Hidden size|Heads|Context length| |:---:|:---:|:---:|:---:|:---:|:---:| |13b model|13b|40|5120|40|2048| |1.3b model|1.3b|24|2048|16|2048| ## Training - **Pre-training:** - **Hardware:** 96 A100 40GB GPUs ([mdx 
cluster](https://mdx.jp/en/)) - **Software:** Megatron-DeepSpeed - **Instruction tuning:** - **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/)) - **Software:** [TRL](https://github.com/huggingface/trl), [PEFT](https://github.com/huggingface/peft), and [DeepSpeed](https://github.com/microsoft/DeepSpeed) ## Tokenizer The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model. The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1). Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-ja-tokenizer` for details on the vocabulary construction procedure. - **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0` - **Training algorithm:** SentencePiece Unigram byte-fallback - **Training data:** A subset of the datasets for model pre-training - **Vocabulary size:** 50,570 (mixed vocabulary of Japanese, English, and source code) ## Datasets ### Pre-training The models have been pre-trained using a blend of the following datasets. | Language | Dataset | Tokens| |:---:|:---:|:---:| |Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.5B ||[mC4](https://huggingface.co/datasets/mc4)|136B |English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|5B ||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B The pre-training was continuously conducted using a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens. We finalized the pre-training with additional (potentially) high-quality 27B tokens data obtained from the identical source datasets listed above used for the 10-fold data. ### Instruction tuning The models have been fine-tuned on the following datasets. | Language | Dataset | description | |:---|:---:|:---:| |Japanese|[jaster](https://github.com/llm-jp/llm-jp-eval)| An automatically transformed data from the existing Japanese NLP datasets | |English|[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)| - | |Japanese|[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| A translated one by DeepL in LLM-jp | |English|[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| English subset of [oasst1 dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) | |Japanese|[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| A translated one by DeepL in LLM-jp | |Japanese|[ichikara_003_001](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/)| ichikara-instruction dataset (ver.003-001) |Japanese|[hh-rlhf-12k-ja](https://huggingface.co/datasets/llm-jp/hh-rlhf-12k-ja)| A translated one by DeepL in LLM-jp | ## Evaluation You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) for the evaluation. ## Risks and Limitations The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. 
## Send Questions to llm-jp(at)nii.ac.jp ## License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Model Card Authors *The names are listed in alphabetical order.* Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takashi Kodama, Takumi Okamoto.
DNALLE/ddhteste
DNALLE
2024-02-07T19:49:12Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-02-07T19:48:34Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Geerath/distilbert-base-uncased-distilled-squad
Geerath
2024-02-07T19:49:11Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-02-07T18:25:28Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-distilled-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1892 ## Model description The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT, and the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark. This model is a fine-tuned checkpoint of DistilBERT-base-uncased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. The results below are my own reproduction of the development by Hugging Face. ## How to Get Started with the Model Use the code below: ```python from transformers import pipeline question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad') context = r""" Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script. """ result = question_answerer(question="What is a good example of a question answering dataset?", context=context) print( f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}" ) ``` Here is how to use this model in PyTorch: ```python from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering import torch tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad') model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad') question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" inputs = tokenizer(question, text, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) answer_start_index = torch.argmax(outputs.start_logits) answer_end_index = torch.argmax(outputs.end_logits) predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] tokenizer.decode(predict_answer_tokens) ``` And in TensorFlow: ```python from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering import tensorflow as tf tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad") model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad") question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" inputs = tokenizer(question, text, return_tensors="tf") outputs = model(**inputs) answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] tokenizer.decode(predict_answer_tokens) ``` ## Uses This model can be used for question answering. ## Intended uses & limitations CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes. ## Training and evaluation data This model reaches an F1 score of 82.75539002485876 and an exact match of 73.66130558183538 on the SQuAD v1.1 dev set (for comparison, the BERT bert-base-uncased version reaches an F1 score of 88.5). ## Training procedure Preprocessing: see the distilbert-base-uncased model card for further details. Pretraining: see the distilbert-base-uncased model card for further details. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2559 | 1.0 | 5533 | 1.1892 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
jashanno/ppo-LunarLander-v2
jashanno
2024-02-07T19:42:38Z
0
1
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T19:42:20Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 239.67 +/- 16.27 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
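A minimal load-and-evaluate sketch with huggingface_sb3 and Stable-Baselines3; the checkpoint filename below is an assumption (the usual huggingface_sb3 naming) and may differ in this repository:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed; adjust if needed)
checkpoint = load_from_hub(repo_id="jashanno/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the policy over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```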
prarthana878/my-pet-dog
prarthana878
2024-02-07T19:35:10Z
1
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-07T19:30:41Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My--Pet-Dog Dreambooth model trained by prarthana878 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: 4jk21cs044 Sample pictures of this concept: ![0](https://huggingface.co/prarthana878/my-pet-dog/resolve/main/sample_images/xzg_(1).jpeg.jpg) ![1](https://huggingface.co/prarthana878/my-pet-dog/resolve/main/sample_images/xzg.jpg) ![2](https://huggingface.co/prarthana878/my-pet-dog/resolve/main/sample_images/xzg_(2).jpg) ![3](https://huggingface.co/prarthana878/my-pet-dog/resolve/main/sample_images/xzg_(4).jpg) ![4](https://huggingface.co/prarthana878/my-pet-dog/resolve/main/sample_images/xzg_(3).jpg)
arryuann/medical-text-ft
arryuann
2024-02-07T19:24:50Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-07T19:21:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kbalde/code-llama-7b-text-to-sql
kbalde
2024-02-07T19:23:23Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-02-07T19:07:57Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: codellama/CodeLlama-7b-hf model-index: - name: code-llama-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
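The card gives no usage code; a minimal inference sketch with PEFT follows. The tokenizer is taken from the base model, and the prompt wording is illustrative only, since the prompt template used during fine-tuning is not documented here.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the adapter from this repo on top of its CodeLlama base model
adapter_id = "kbalde/code-llama-7b-text-to-sql"
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative prompt; the training prompt format is not specified in this card
prompt = "Translate to SQL: How many customers placed an order in 2023?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```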
SolaireOfTheSun/Llama-2-7b-chat-hf-sharded-bf16-feinabgestimmt-adapters-2
SolaireOfTheSun
2024-02-07T19:21:01Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-02-07T17:27:54Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
JiajingChen/a
JiajingChen
2024-02-07T19:16:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T09:43:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.16 +/- 20.85 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
MichalGas/vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024
MichalGas
2024-02-07T19:03:30Z
5
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-07T17:22:14Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - f1 model-index: - name: vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: F1 type: f1 value: 0.7716535433070866 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8842 - F1: 0.7717 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.731 | 0.98 | 35 | 1.6748 | 0.3386 | | 1.5196 | 1.99 | 71 | 1.4890 | 0.4173 | | 1.3727 | 2.99 | 107 | 1.2938 | 0.5276 | | 1.2194 | 4.0 | 143 | 1.1519 | 0.6457 | | 1.1538 | 4.98 | 178 | 1.0544 | 0.6693 | | 1.0379 | 5.99 | 214 | 0.9852 | 0.7165 | | 1.0232 | 6.99 | 250 | 0.9439 | 0.7323 | | 0.9586 | 8.0 | 286 | 0.9136 | 0.7480 | | 0.9374 | 8.98 | 321 | 0.8946 | 0.7638 | | 0.96 | 9.79 | 350 | 0.8842 | 0.7717 | ### Framework versions - Transformers 4.36.1 - Pytorch 2.1.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
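Since the usage sections above are empty, here is a minimal classification sketch with the transformers pipeline; the image path is a placeholder and the label names depend on the unspecified imagefolder dataset used for fine-tuning.

```python
from transformers import pipeline

# Image-classification pipeline on the fine-tuned ViT checkpoint
classifier = pipeline(
    "image-classification",
    model="MichalGas/vit-base-patch16-224-in21k-finetuned-mgasior-07-02-2024",
)

# Placeholder path; labels come from the training imagefolder dataset
predictions = classifier("path/to/image.jpg")
print(predictions)
```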
CLMBR/existential-there-quantifier-lstm-3
CLMBR
2024-02-07T18:51:35Z
6
0
transformers
[ "transformers", "pytorch", "rnn", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-02-02T10:12:54Z
--- tags: - generated_from_trainer model-index: - name: existential-there-quantifier-lstm-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # existential-there-quantifier-lstm-3 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9719 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.787 | 0.03 | 76320 | 4.7500 | | 4.5023 | 1.03 | 152640 | 4.4726 | | 4.3574 | 0.03 | 228960 | 4.3385 | | 4.2706 | 1.03 | 305280 | 4.2563 | | 4.2067 | 0.03 | 381600 | 4.2004 | | 4.1598 | 1.03 | 457920 | 4.1593 | | 4.1222 | 0.03 | 534240 | 4.1286 | | 4.0884 | 1.03 | 610560 | 4.1044 | | 4.06 | 0.03 | 686880 | 4.0853 | | 4.0349 | 1.03 | 763200 | 4.0699 | | 4.0153 | 0.03 | 839520 | 4.0564 | | 3.9999 | 1.03 | 915840 | 4.0454 | | 3.9841 | 0.03 | 992160 | 4.0365 | | 3.9663 | 1.03 | 1068480 | 4.0289 | | 3.9576 | 0.03 | 1144800 | 4.0222 | | 3.9434 | 1.03 | 1221120 | 4.0164 | | 3.9331 | 0.03 | 1297440 | 4.0107 | | 3.9224 | 1.03 | 1373760 | 4.0063 | | 3.9117 | 0.03 | 1450080 | 4.0029 | | 3.9068 | 1.03 | 1526400 | 3.9988 | | 3.9024 | 0.03 | 1602720 | 3.9966 | | 3.8961 | 1.03 | 1679040 | 3.9935 | | 3.8922 | 0.03 | 1755360 | 3.9913 | | 3.8832 | 0.03 | 1831680 | 3.9888 | | 3.876 | 1.03 | 1908000 | 3.9861 | | 3.8682 | 0.03 | 1984320 | 3.9844 | | 3.8638 | 1.03 | 2060640 | 3.9831 | | 3.8615 | 0.03 | 2136960 | 3.9816 | | 3.8567 | 1.03 | 2213280 | 3.9804 | | 3.85 | 0.03 | 2289600 | 3.9793 | | 3.8483 | 1.03 | 2365920 | 3.9779 | | 3.8467 | 0.03 | 2442240 | 3.9766 | | 3.8417 | 0.03 | 2518560 | 3.9756 | | 3.8381 | 1.03 | 2594880 | 3.9749 | | 3.8329 | 0.03 | 2671200 | 3.9740 | | 3.8346 | 1.03 | 2747520 | 3.9735 | | 3.8324 | 0.03 | 2823840 | 3.9729 | | 3.8333 | 1.03 | 2900160 | 3.9724 | | 3.8332 | 0.03 | 2976480 | 3.9720 | | 3.8296 | 1.02 | 3052726 | 3.9719 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
shapermindai/pygmalion-free
shapermindai
2024-02-07T18:43:30Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-generation", "text generation", "conversational", "en", "license:agpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T13:28:02Z
--- license: agpl-3.0 language: - en thumbnail: null tags: - text generation - conversational inference: true pipeline_tag: conversational --- # Pygmalion 1.3B ## Model description Pygmalion 1.3B is a proof-of-concept dialogue model based on EleutherAI's [pythia-1.3b-deduped](https://huggingface.co/EleutherAI/pythia-1.3b-deduped). **Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances. ## Training data The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations. ## Training procedure Fine-tuning was done using [ColossalAI](https://github.com/hpcaitech/ColossalAI) (specifically, with a slightly modified version of their [OPT fine-tune example](https://github.com/hpcaitech/ColossalAI/blob/78509124d32b63b7fc36f6508e0576a326d51422/examples/language/opt/run_clm.py)) for around 11.4 million tokens over 5440 steps on a single 24GB GPU. The run took just under 21 hours. ## Intended use ### The easy way We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb). ### The manual way The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format: ``` [CHARACTER]'s Persona: [A few sentences about the character you want the model to play] [DIALOGUE HISTORY] You: [Your input message here] [CHARACTER]: ``` Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like: ``` [CHARACTER]: [some dialogue here] You: [your response to the dialogue above] ``` Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition. ## Known issues - The model can get stuck repeating certain phrases, or sometimes even entire sentences. - We believe this is due to that behavior being present in the training data itself, and plan to investigate and adjust accordingly for future versions.
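A minimal generation sketch applying the prompt format described above with plain transformers; the persona and dialogue strings are illustrative, and the sampling settings are only a starting point.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shapermindai/pygmalion-free"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build a prompt in the format described above (persona + dialogue history)
prompt = (
    "Aria's Persona: Aria is a cheerful starship engineer who loves fixing things.\n"
    "You: The warp drive is making a strange noise. Can you take a look?\n"
    "Aria:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.9, temperature=0.8)

# Print only the newly generated reply
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```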
jlbaker361/dcgan-cond-wikiart1000-clip-resized
jlbaker361
2024-02-07T18:38:49Z
0
0
null
[ "region:us" ]
null
2024-02-01T04:06:55Z
--- {} --- Creative Adversarial Network - epochs: 200 - dataset: jlbaker361/wikiart-balanced1000 - number of classes: 27 - batch_size: 128 - images were resized to 768 and then center cropped to 512 - clip: True - conditional: True - discriminator parameters: init_dim: 32, final_dim: 512 - generator parameters: input noise_dim: 100
ryusangwon/bart-large-cnndm
ryusangwon
2024-02-07T18:30:26Z
4
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-02T12:34:59Z
--- license: apache-2.0 base_model: facebook/bart-large tags: - generated_from_trainer metrics: - rouge model-index: - name: cnn_dailymail_726_bart-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cnn_dailymail_726_bart-large This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8412 - Rouge1: 0.2469 - Rouge2: 0.1266 - Rougel: 0.2074 - Rougelsum: 0.2332 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.9706 | 0.22 | 500 | 0.9015 | 0.237 | 0.1181 | 0.1979 | 0.2232 | 19.9999 | | 0.9212 | 0.45 | 1000 | 0.8771 | 0.237 | 0.1193 | 0.199 | 0.2233 | 20.0 | | 0.8991 | 0.67 | 1500 | 0.8572 | 0.2443 | 0.1238 | 0.2045 | 0.2304 | 20.0 | | 0.9085 | 0.89 | 2000 | 0.8519 | 0.2404 | 0.1227 | 0.2022 | 0.2269 | 20.0 | | 0.8494 | 1.11 | 2500 | 0.8471 | 0.2437 | 0.1233 | 0.2041 | 0.2298 | 20.0 | | 0.832 | 1.34 | 3000 | 0.8400 | 0.2438 | 0.1248 | 0.2055 | 0.2301 | 20.0 | | 0.8522 | 1.56 | 3500 | 0.8393 | 0.2417 | 0.1242 | 0.2043 | 0.2283 | 20.0 | | 0.8494 | 1.78 | 4000 | 0.8338 | 0.2436 | 0.1239 | 0.2047 | 0.23 | 19.9999 | | 0.7729 | 2.01 | 4500 | 0.8332 | 0.2431 | 0.1253 | 0.2048 | 0.2298 | 20.0 | | 0.7761 | 2.23 | 5000 | 0.8323 | 0.2477 | 0.1264 | 0.207 | 0.2335 | 19.9994 | | 0.7788 | 2.45 | 5500 | 0.8277 | 0.2473 | 0.1259 | 0.2068 | 0.2333 | 20.0 | | 0.7832 | 2.67 | 6000 | 0.8251 | 0.2453 | 0.126 | 0.2061 | 0.2317 | 20.0 | | 0.7888 | 2.9 | 6500 | 0.8239 | 0.242 | 0.1241 | 0.2037 | 0.2287 | 20.0 | | 0.7413 | 3.12 | 7000 | 0.8360 | 0.2394 | 0.1228 | 0.2017 | 0.2258 | 20.0 | | 0.7438 | 3.34 | 7500 | 0.8283 | 0.2462 | 0.1267 | 0.2072 | 0.2326 | 19.9999 | | 0.7271 | 3.57 | 8000 | 0.8275 | 0.2406 | 0.1235 | 0.2028 | 0.2276 | 20.0 | | 0.7435 | 3.79 | 8500 | 0.8221 | 0.2451 | 0.1254 | 0.2055 | 0.2311 | 19.9998 | | 0.7072 | 4.01 | 9000 | 0.8277 | 0.2437 | 0.1251 | 0.2049 | 0.2301 | 19.9999 | | 0.708 | 4.24 | 9500 | 0.8270 | 0.2465 | 0.1263 | 0.2067 | 0.2325 | 19.9999 | | 0.7058 | 4.46 | 10000 | 0.8279 | 0.2424 | 0.1249 | 0.2045 | 0.229 | 19.9999 | | 0.6918 | 4.68 | 10500 | 0.8248 | 0.246 | 0.1259 | 0.2063 | 0.232 | 19.9998 | | 0.7121 | 4.9 | 11000 | 0.8231 | 0.2457 | 0.126 | 0.2058 | 0.232 | 19.9999 | | 0.6667 | 5.13 | 11500 | 0.8297 | 0.2458 | 0.1262 | 0.2066 | 0.2323 | 19.9996 | | 0.6767 | 5.35 | 12000 | 0.8309 | 0.2469 | 0.1269 | 0.2071 | 0.2332 | 19.9996 | | 0.6961 | 5.57 | 12500 | 0.8299 | 0.247 | 0.1271 | 0.2074 | 0.2333 | 20.0 | | 0.6842 | 5.8 | 13000 | 0.8333 | 0.2473 | 0.127 | 0.2077 | 0.2336 | 19.9996 | | 0.6485 | 6.02 | 13500 | 0.8360 | 0.2454 | 0.1259 | 0.2061 | 0.2316 | 19.9998 | | 0.6651 
| 6.24 | 14000 | 0.8349 | 0.2454 | 0.126 | 0.2062 | 0.2314 | 20.0 | | 0.6483 | 6.46 | 14500 | 0.8331 | 0.2454 | 0.1258 | 0.2058 | 0.2316 | 20.0 | | 0.6626 | 6.69 | 15000 | 0.8309 | 0.2468 | 0.127 | 0.2069 | 0.2328 | 19.9996 | | 0.6675 | 6.91 | 15500 | 0.8337 | 0.2448 | 0.1255 | 0.2056 | 0.231 | 19.9999 | | 0.6479 | 7.13 | 16000 | 0.8387 | 0.2471 | 0.1267 | 0.2074 | 0.2333 | 19.9999 | | 0.6506 | 7.36 | 16500 | 0.8377 | 0.2474 | 0.1264 | 0.2071 | 0.2335 | 19.9999 | | 0.643 | 7.58 | 17000 | 0.8369 | 0.2454 | 0.1259 | 0.2059 | 0.2318 | 20.0 | | 0.6262 | 7.8 | 17500 | 0.8378 | 0.2466 | 0.1269 | 0.2071 | 0.233 | 19.9997 | | 0.6235 | 8.02 | 18000 | 0.8415 | 0.2458 | 0.1266 | 0.2065 | 0.2321 | 20.0 | | 0.6081 | 8.25 | 18500 | 0.8421 | 0.2465 | 0.1267 | 0.2069 | 0.2326 | 19.9997 | | 0.6257 | 8.47 | 19000 | 0.8409 | 0.2477 | 0.1267 | 0.2075 | 0.2337 | 19.9999 | | 0.6187 | 8.69 | 19500 | 0.8381 | 0.2459 | 0.1264 | 0.2066 | 0.2321 | 19.9997 | | 0.6178 | 8.92 | 20000 | 0.8384 | 0.248 | 0.1273 | 0.2079 | 0.2339 | 19.9996 | | 0.6018 | 9.14 | 20500 | 0.8432 | 0.2468 | 0.1265 | 0.2071 | 0.2329 | 20.0 | | 0.6235 | 9.36 | 21000 | 0.8418 | 0.2469 | 0.1265 | 0.207 | 0.233 | 20.0 | | 0.606 | 9.58 | 21500 | 0.8418 | 0.2464 | 0.1264 | 0.207 | 0.2327 | 19.9999 | | 0.6016 | 9.81 | 22000 | 0.8412 | 0.2469 | 0.1266 | 0.2074 | 0.2332 | 20.0 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
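No usage example is given above; a minimal summarization sketch follows. The input article is illustrative, and the short max_length mirrors the roughly 20-token generation length reported in the evaluation.

```python
from transformers import pipeline

# Summarization pipeline on the fine-tuned BART checkpoint
summarizer = pipeline("summarization", model="ryusangwon/bart-large-cnndm")

# Illustrative input article
article = (
    "Researchers announced on Tuesday that a new battery design could cut charging "
    "times for electric vehicles in half while lasting twice as long as current cells."
)

# Keep summaries short, in line with the ~20-token generation length used in evaluation
print(summarizer(article, max_length=20, min_length=5, do_sample=False))
```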
Kooten/BagelMIsteryTour-v2-8x7B-Imatrix-GGUF
Kooten
2024-02-07T18:27:19Z
19
4
null
[ "gguf", "mergekit", "merge", "base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:Sao10K/Sensualize-Mixtral-bf16", "base_model:merge:Sao10K/Sensualize-Mixtral-bf16", "base_model:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-v0.1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-02-07T16:06:33Z
--- base_model: - mistralai/Mixtral-8x7B-v0.1 - jondurbin/bagel-dpo-8x7b-v0.2 - Sao10K/Sensualize-Mixtral-bf16 - mistralai/Mixtral-8x7B-v0.1 - Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora - mistralai/Mixtral-8x7B-Instruct-v0.1 tags: - mergekit - merge license: cc-by-nc-4.0 --- # BagelMIsteryTour-v2-8x7B 3.5bpw Imatrix GGUF quant of [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) ## Other quants: EXL2: [5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-5bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-3.5bpw-exl2) [GGUF](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-Imatrix-GGUF): [IQ3_XXS](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-Imatrix-GGUF/blob/main/BagelMIsteryTour-v2-8x7B-IQ3_XXS.gguf), [IQ2_XS](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-Imatrix-GGUF/blob/main/BagelMIsteryTour-v2-8x7B-IQ2_XS.gguf), [IQ2_XXS](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-Imatrix-GGUF/blob/main/BagelMIsteryTour-v2-8x7B-IQ2_XXS.gguf) ## Prompt format: Alpaca It is noted to also work with mistral ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Input: {input} ### Response: ``` ## Contact Kooten on discord [ko-fi.com/kooten](https://ko-fi.com/kooten) if you would like to support me
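A minimal sketch for running one of the quants above locally, assuming llama-cpp-python is used (not stated in the card); adjust the file name, context size, and GPU offloading to your setup.

```python
from llama_cpp import Llama

# Load one of the GGUF files linked above (assumed to be downloaded locally)
llm = Llama(
    model_path="BagelMIsteryTour-v2-8x7B-IQ3_XXS.gguf",
    n_ctx=4096,       # context window; adjust to available memory
    n_gpu_layers=-1,  # offload all layers to GPU if possible
)

# Alpaca-style prompt, matching the prompt format above
prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a mixture-of-experts model is.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```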
chandantomar/dog_cat_classifier
chandantomar
2024-02-07T18:26:17Z
0
0
null
[ "image-classification", "region:us" ]
image-classification
2024-02-06T19:17:04Z
--- pipeline_tag: image-classification --- Trained on the Kaggle dataset salader/dogs-vs-cats.
MaziyarPanahi/Smaug-72B-v0.1-GPTQ
MaziyarPanahi
2024-02-07T18:24:50Z
17
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "finetuned", "quantized", "4-bit", "gptq", "base_model:moreh/MoMo-72B-lora-1.8.7-DPO", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "has_space", "base_model:abacusai/Smaug-72B-v0.1", "base_model:finetune:abacusai/Smaug-72B-v0.1", "license:apache-2.0" ]
text-generation
2024-02-07T18:18:03Z
--- license: apache-2.0 tags: - finetuned - quantized - 4-bit - gptq - transformers - safetensors - llama - text-generation - base_model:moreh/MoMo-72B-lora-1.8.7-DPO - license:other - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - has_space model_name: Smaug-72B-v0.1-GPTQ base_model: abacusai/Smaug-72B-v0.1 inference: false model_creator: abacusai pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # Description [MaziyarPanahi/Smaug-72B-v0.1-GPTQ](https://huggingface.co/MaziyarPanahi/Smaug-72B-v0.1-GPTQ) is a quantized (GPTQ) version of [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1) ## How to use ### Install the necessary packages ``` pip install --upgrade accelerate auto-gptq transformers ``` ### Example Python code ```python from transformers import AutoTokenizer, pipeline from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig import torch model_id = "MaziyarPanahi/Smaug-72B-v0.1-GPTQ" quantize_config = BaseQuantizeConfig( bits=4, group_size=128, desc_act=False ) model = AutoGPTQForCausalLM.from_quantized( model_id, use_safetensors=True, device="cuda:0", quantize_config=quantize_config) tokenizer = AutoTokenizer.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.1 ) outputs = pipe("What is a large language model?") print(outputs[0]["generated_text"]) ```
macadeliccc/laser-dolphin-mixtral-2x7b-dpo-AWQ
macadeliccc
2024-02-07T18:23:05Z
5
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-02-07T18:16:57Z
--- license: cc --- # Laser-dolphin-mixtral-2x7b-dpo-AWQ The original model is available at [macadeliccc/laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo). ## Quantizations - 4-bit
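A minimal loading sketch (assumed usage, not from the original card): recent transformers releases can load AWQ checkpoints directly when the autoawq package is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/laser-dolphin-mixtral-2x7b-dpo-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The AWQ quantization config is read from the checkpoint; requires `pip install autoawq`
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

prompt = "What is a mixture-of-experts language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```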
fazito25/Taxi-v3
fazito25
2024-02-07T18:18:59Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T18:18:57Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="fazito25/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
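The snippet above assumes a load_from_hub helper; a minimal version is sketched below, assuming the Q-table was saved with pickle as in the Deep RL course notebooks.

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download the pickled Q-table and hyperparameters from the Hub
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```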
wt697075/java
wt697075
2024-02-07T18:18:48Z
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-02-07T18:18:48Z
--- license: cc-by-nc-sa-4.0 ---
turgutburak01/cartPole8
turgutburak01
2024-02-07T18:17:14Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T17:39:35Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: cartPole8 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
DifeiT/text_classification_model
DifeiT
2024-02-07T18:16:35Z
6
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:dmis-lab/biobert-v1.1", "base_model:finetune:dmis-lab/biobert-v1.1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-07T17:52:09Z
--- base_model: dmis-lab/biobert-v1.1 tags: - generated_from_trainer metrics: - accuracy model-index: - name: text_classification_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_classification_model This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5013 - Accuracy: 0.8046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 22 | 0.5339 | 0.7586 | | No log | 2.0 | 44 | 0.5013 | 0.8046 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu118 - Datasets 2.16.1 - Tokenizers 0.15.1
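The usage sections above are empty; a minimal inference sketch with the transformers pipeline follows. The example sentence is illustrative, and the meaning of the predicted labels depends on the unspecified training dataset.

```python
from transformers import pipeline

# Text-classification pipeline on the fine-tuned BioBERT checkpoint
classifier = pipeline("text-classification", model="DifeiT/text_classification_model")

# Illustrative biomedical sentence; label semantics depend on the training data
print(classifier("The patient was prescribed metformin for type 2 diabetes."))
```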
hughtayloe/handertrails
hughtayloe
2024-02-07T18:07:49Z
6
0
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "image-to-text", "en", "dataset:liuhaotian/LLaVA-Instruct-150K", "endpoints_compatible", "region:us" ]
image-to-text
2024-02-01T16:52:31Z
--- language: - en pipeline_tag: image-to-text inference: false arxiv: 2304.08485 datasets: - liuhaotian/LLaVA-Instruct-150K --- # LLaVA Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/FPshq08TKYD0e-qwPLDVO.png) Below is the model card of Llava model 7b, which is copied from the original Llava model card that you can find [here](https://huggingface.co/liuhaotian/llava-v1.5-13b). Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing) Or check out our Spaces demo! [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces/llava-hf/llava-4bit) ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. **Model date:** LLaVA-v1.5-7B was trained in September 2023. **Paper or resources for more information:** https://llava-vl.github.io/ ## How to use the model First, make sure to have `transformers >= 4.35.3`. The model supports multi-image and multi-prompt generation. Meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` to the location where you want to query images: ### Using `pipeline`: Below we used [`"llava-hf/llava-1.5-7b-hf"`](https://huggingface.co/llava-hf/llava-1.5-7b-hf) checkpoint. ```python from transformers import pipeline from PIL import Image import requests model_id = "llava-hf/llava-1.5-7b-hf" pipe = pipeline("image-to-text", model=model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(url, stream=True).raw) prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:" outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200}) print(outputs) >>> {"generated_text": "\nUSER: What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"} ``` ### Using pure `transformers`: Below is an example script to run generation in `float16` precision on a GPU device: ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForConditionalGeneration model_id = "llava-hf/llava-1.5-7b-hf" prompt = "USER: <image>\nWhat are these?\nASSISTANT:" image_file = "http://images.cocodataset.org/val2017/000000039769.jpg" model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, ).to(0) processor = AutoProcessor.from_pretrained(model_id) raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=200, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)) ``` ### Model optimization #### 4-bit quantization through `bitsandbytes` library First make sure to install `bitsandbytes`, `pip install bitsandbytes` and make sure to have access to a CUDA compatible GPU device. 
Simply change the snippet above with: ```diff model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + load_in_4bit=True ) ``` #### Use Flash-Attention 2 to further speed-up generation First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) regarding that package installation. Simply change the snippet above with: ```diff model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + use_flash_attention_2=True ).to(0) ``` ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
edgilr/intel-image-classification
edgilr
2024-02-07T18:06:24Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-02-07T18:02:55Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
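A minimal loading sketch, assuming the learner was pushed with the standard fastai/huggingface_hub integration; the image path is a placeholder.

```python
from huggingface_hub import from_pretrained_fastai

# Download and rebuild the fastai Learner from the Hub
learn = from_pretrained_fastai("edgilr/intel-image-classification")

# Placeholder image path; returns (predicted class, class index, probabilities)
pred_class, pred_idx, probs = learn.predict("path/to/scene.jpg")
print(pred_class, float(probs[pred_idx]))
```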
paulux84/autotrain-z58fs-z9tot
paulux84
2024-02-07T18:05:22Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "autotrain", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T16:21:47Z
--- license: other tags: - autotrain - text-generation widget: - text: 'I love AutoTrain because ' --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
sruthis/feb7th
sruthis
2024-02-07T18:02:17Z
7
0
transformers
[ "transformers", "safetensors", "deit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-base-distilled-patch16-224", "base_model:finetune:facebook/deit-base-distilled-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-07T16:55:10Z
--- license: apache-2.0 base_model: facebook/deit-base-distilled-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: feb7th results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9898785425101214 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # feb7th This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0464 - Accuracy: 0.9899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 1234 - gradient_accumulation_steps: 10 - total_train_batch_size: 160 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.97 | 12 | 0.0598 | 0.9798 | | No log | 1.94 | 24 | 0.0480 | 0.9879 | | No log | 2.98 | 37 | 0.0531 | 0.9838 | | No log | 3.95 | 49 | 0.0456 | 0.9899 | | No log | 4.84 | 60 | 0.0464 | 0.9899 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
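### Example usage (sketch)

A minimal inference sketch, assuming the checkpoint is public and that the labels stored in the model config are the intended class names; the image path below is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned DeiT checkpoint as an image-classification pipeline.
classifier = pipeline("image-classification", model="sruthis/feb7th")

# "example.jpg" is a placeholder; any local image path or URL works.
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```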
delli/mistral-7b-address-validator-merged
delli
2024-02-07T18:01:07Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T17:52:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wyyadd/fork-detect-fake
wyyadd
2024-02-07T17:53:39Z
11
0
transformers
[ "transformers", "pytorch", "safetensors", "ResNet", "image-classification", "custom_code", "base_model:aaronespasa/deepfake-detection-resnetinceptionv1", "base_model:finetune:aaronespasa/deepfake-detection-resnetinceptionv1", "license:apache-2.0", "autotrain_compatible", "region:us" ]
image-classification
2024-02-07T17:31:03Z
--- license: apache-2.0 base_model: aaronespasa/deepfake-detection-resnetinceptionv1 library_name: transformers --- # Original model repo 📖 This is a customized version of the following model: [aaronespasa/deepfake-detection-resnetinceptionv1](https://huggingface.co/aaronespasa/deepfake-detection-resnetinceptionv1) # How to use ```python from transformers import pipeline pipe = pipeline(model="not-lain/deepfake", trust_remote_code=True) pipe.predict("img_path.jpg") ``` ```python >> {"confidences": confidences, "face_with_mask": face_with_mask} ``` # Dependencies To install related dependencies, simply use the command: ``` !wget https://huggingface.co/not-lain/deepfake/resolve/main/requirements.txt && pip install -r requirements.txt ```
0xJCarlos/QuestionAnswer_ESP
0xJCarlos
2024-02-07T17:50:51Z
14
1
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa", "base_model:finetune:dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa", "endpoints_compatible", "region:us" ]
question-answering
2023-11-23T17:51:49Z
--- base_model: dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa tags: - generated_from_keras_callback model-index: - name: 0xJCarlos/QuestionAnswer_ESP results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # 0xJCarlos/QuestionAnswer_ESP This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.3146 - Validation Loss: 1.6961 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9292 | 1.7179 | 0 | | 1.4487 | 1.6961 | 1 | | 1.3231 | 1.6961 | 2 | | 1.3165 | 1.6961 | 3 | | 1.3146 | 1.6961 | 4 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.1
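### Example usage (sketch)

A minimal question-answering sketch, assuming the TensorFlow weights in this repository load through the standard pipeline; the Spanish question/context pair is illustrative only.

```python
from transformers import pipeline

# The repository ships TensorFlow weights, so the TF framework is requested explicitly.
qa = pipeline(
    "question-answering",
    model="0xJCarlos/QuestionAnswer_ESP",
    framework="tf",
)

# Illustrative example (not taken from the training data).
result = qa(
    question="¿Dónde vive Carlos?",
    context="Carlos es ingeniero y vive en Ciudad de México desde 2019.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```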
Crystalcareai/CrystalQwen-1.5-7B-Alpha-Lora
Crystalcareai
2024-02-07T17:42:03Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "license:other", "region:us" ]
null
2024-02-07T17:39:24Z
--- license: other library_name: peft tags: - llama-factory - lora - generated_from_trainer base_model: Qwen/Qwen1.5-7B model-index: - name: train_2024-02-07-03-18-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_2024-02-07-03-18-19 This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the openorca dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1.5 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.1+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
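### Example usage (sketch)

A minimal loading sketch, assuming this repository contains only the PEFT LoRA adapter weights and that they are applied on top of the Qwen/Qwen1.5-7B base model; the prompt is illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen1.5-7B"
adapter_id = "Crystalcareai/CrystalQwen-1.5-7B-Alpha-Lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```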
crrodrvi/Practica1
crrodrvi
2024-02-07T17:40:34Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-02-07T17:40:29Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
delli/mistral-7b-address-validator
delli
2024-02-07T17:26:01Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-07T17:25:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tommidi/spatio_temporal_vit-finetuned-ucf101-subset
Tommidi
2024-02-07T17:24:01Z
18
0
transformers
[ "transformers", "tensorboard", "safetensors", "st_vit", "generated_from_trainer", "base_model:Tommidi/st_vit_untrained", "base_model:finetune:Tommidi/st_vit_untrained", "endpoints_compatible", "region:us" ]
null
2024-02-07T16:39:37Z
--- base_model: Tommidi/st_vit_untrained tags: - generated_from_trainer metrics: - accuracy model-index: - name: spatio_temporal_vit-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spatio_temporal_vit-finetuned-ucf101-subset This model is a fine-tuned version of [Tommidi/st_vit_untrained](https://huggingface.co/Tommidi/st_vit_untrained) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1244 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 37 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6013 | 1.0 | 37 | 0.1244 | 0.9 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
MiVaCod/intel-image-classification
MiVaCod
2024-02-07T17:15:39Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-02-07T17:15:35Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
io-roboto/decision-transformer
io-roboto
2024-02-07T17:13:24Z
18
0
transformers
[ "transformers", "tensorboard", "safetensors", "decision_transformer", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-02-07T17:13:22Z
--- tags: - generated_from_trainer model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 120 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
waldie/Etheria-55b-v0.1-2.5bpw-h6-exl2
waldie
2024-02-07T16:58:38Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "Etheria", "arxiv:2311.03099", "arxiv:2306.01708", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T16:09:57Z
--- base_model: [] tags: - mergekit - Etheria license: apache-2.0 --- # Steelskull/Etheria-55b-v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/RAhrbktyyVQxOR1np-9L2.png) ## Merge Details An attempt to make a functional goliath-style merge to create an [Etheria] 55b-200k with two yi-34b-200k models. Due to the merge it 'theoretically' should have a context of 200k, but I recommend starting at 32k and moving up, as it is unknown (at this time) what the merge has done to the context length. This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good). I then created a sacrificial 55B out of the most performant yi-34b-200k model and performed a Dare_ties merge to equalize the model into its current state. ### Recommended settings and Prompt Format: I've tested it up to 32k context using exl2 with these settings: ``` "temp": 0.7, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.1, "rep_pen": 1.1, "rep_pen_range": 8192, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0, "presence_pen": 0, "do_sample": true, "early_stopping": false, "add_bos_token": false, "truncation_length": 2048, "ban_eos_token": true, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, ``` Prompt formats that work well: ``` ChatML & Alpaca ``` ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base. ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Merged-Etheria-55b models: - model: Sacr-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 - model: Merged-Etheria-55b parameters: weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113] density: 0.61 merge_method: dare_ties tokenizer_source: union parameters: int8_mask: true dtype: bfloat16 ```
Lectoric/Stable_Diffusion_Challenge
Lectoric
2024-02-07T16:52:01Z
1
0
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-01-31T13:12:13Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of iron armor tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was trained.
mustafakara/duck
mustafakara
2024-02-07T16:50:14Z
0
1
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-07T16:37:08Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of rsu monster toy tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - mustafakara/duck This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of rsu monster toy using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
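A minimal inference sketch, assuming the full fine-tuned pipeline in this repository loads directly with diffusers (as the StableDiffusionPipeline tag suggests) and using the instance prompt from the card; steps and output path are arbitrary.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-fine-tuned pipeline directly from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "mustafakara/duck", torch_dtype=torch.float16
).to("cuda")

# Instance prompt the weights were trained on, per the card above.
image = pipe("a photo of rsu monster toy", num_inference_steps=30).images[0]
image.save("rsu_monster_toy.png")
```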
Muhammedwelian/Lamba_man
Muhammedwelian
2024-02-07T16:32:32Z
0
0
null
[ "license:other", "region:us" ]
null
2024-02-07T16:32:32Z
--- license: other license_name: '392001' license_link: LICENSE ---
Scott617/ppo-LunarLander-v2
Scott617
2024-02-07T16:29:05Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-07T16:28:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 270.54 +/- 13.03 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
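Since the usage section above is still a TODO, here is a minimal sketch; the checkpoint filename is an assumption based on common SB3 Hub naming, and a gymnasium build with Box2D (and the `LunarLander-v2` registration) is assumed.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption based on the usual naming of SB3 Hub uploads.
checkpoint = load_from_hub(
    repo_id="Scott617/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```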
hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-contradict
hoanghoavienvo
2024-02-07T16:23:18Z
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-07T16:10:05Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: roberta-base-detect-cheapfake-combined-train-test-contradict results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-detect-cheapfake-combined-train-test-contradict This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4261 - Accuracy: 0.89 - F1: 0.8817 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 166 | 0.4435 | 0.84 | 0.8333 | | No log | 2.0 | 332 | 0.6567 | 0.835 | 0.8374 | | No log | 3.0 | 498 | 0.3563 | 0.895 | 0.88 | | 0.2851 | 4.0 | 664 | 0.3671 | 0.895 | 0.8814 | | 0.2851 | 5.0 | 830 | 0.4261 | 0.89 | 0.8817 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.1
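### Example usage (sketch)

A minimal sketch, assuming the checkpoint loads as a standard sequence-classification model; the card does not document the label meanings or the expected input format for caption pairs, so the input below is purely illustrative.

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-contradict",
)

# Illustrative input only; consult the training setup for the real caption-pair format.
print(detector("Caption one. Caption two."))
```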
bdpc/test_twowayloss_implementation
bdpc
2024-02-07T16:14:37Z
91
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-06T12:41:21Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: test_twowayloss_implementation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_twowayloss_implementation This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.9001 - Accuracy: 0.5659 - Precision: 0.0114 - Recall: 0.5082 - F1: 0.0223 - Hamming: 0.4341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Hamming | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | 8.8818 | 0.0 | 5 | 8.9210 | 0.5632 | 0.0110 | 0.4947 | 0.0216 | 0.4368 | | 8.124 | 0.0 | 10 | 8.9001 | 0.5659 | 0.0114 | 0.5082 | 0.0223 | 0.4341 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.7.1 - Tokenizers 0.14.1
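### Example usage (sketch)

The reported metrics (precision, recall, F1, Hamming) suggest a multi-label setup, so the sketch below decodes with an independent sigmoid per class; the 0.5 threshold and the input text are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "bdpc/test_twowayloss_implementation"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: independent sigmoid per class, 0.5 threshold (assumption).
probs = torch.sigmoid(logits)
print((probs > 0.5).int())
```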
manche/gpt2-safeguard-zs
manche
2024-02-07T16:14:17Z
89
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T16:13:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
danaleee/CL_rank10_iter800_valprompt
danaleee
2024-02-07T16:11:20Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-02-07T15:35:01Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks duck tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - danaleee/CL_rank10_iter800_valprompt These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks duck using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
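A minimal inference sketch, assuming a diffusers version with `load_lora_weights` support; the step count and output path are arbitrary.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository to the base pipeline.
pipe.load_lora_weights("danaleee/CL_rank10_iter800_valprompt")

image = pipe("a photo of sks duck", num_inference_steps=30).images[0]
image.save("sks_duck.png")
```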
ffxvs/embeddings-collection-xl
ffxvs
2024-02-07T16:06:37Z
0
1
null
[ "region:us" ]
null
2024-01-22T16:51:09Z
List of embedding collections for SDXL: * [SimplePositiveXL_v2](https://civitai.com/models/118758/simplepositivexl?modelVersionId=182974)
matlok/tinyllama-cinder-openhermes-32k
matlok
2024-02-07T15:58:52Z
11
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:unknown", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T05:17:38Z
--- license: unknown --- ## Merging AI Models like Lego Blocks This model was merged with the following Hugging Face TinyLlama models using ties: - TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T - Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct - Doctor-Shotgun/TinyLlama-1.1B-32k - Tensoic/TinyLlama-1.1B-3T-openhermes - Josephgflowers/TinyLlama-3T-Cinder-v1.3 ## How do I fine-tune this model? ### Fine-tuning using Hugging Face SFTTrainer - [Fine-tuning using Hugging Face SFTTrainer](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing) ### Fine-tuning using Unsloth 2024-02-07 was unable to use unsloth due to pip install issues. Maybe others in the future will have more luck: - [Alpaca + TinyLlama + RoPE Scaling full example.ipynb](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) ## How do I generate my own model merges? This requires setting up your [Hugging Face User Account Access Tokens](https://huggingface.co/settings/tokens) before it will work: If you're using the command line you can use: ```sh huggingface-cli login ``` ```sh time ./run-tiny-merge.py ``` ### What's this code doing? Here's the latest version: ```python3 #!/usr/bin/env python3 import os import transformers import torch import logging from ddare.merge import merge_tensors from ddare.tensor import ( dare_ties_sparsification, relative_norm, divide_tensor_into_sets, ) from ddare.util import get_device import re from typing import Dict, Tuple, List logging.basicConfig(level=logging.INFO) log = logging.getLogger(__name__) def get_models( models: List[str], trust_remote_code: bool, ): """ get the models :param models: model names to download :param trust_remote_code: are you sure??? True/False """ config = { "torch_dtype": torch.float16, "low_cpu_mem_usage": False, "trust_remote_code": trust_remote_code, } loaded_models = [] num_models = len(models) for midx, model_path in enumerate(models): log.info( f"loading model={midx + 1}/{num_models} " f"model={model_path} " ) loaded_models.append( transformers.AutoModelForCausalLM.from_pretrained( model_path, **config ) ) return loaded_models def pm( model, ): """ pretty print model :param model: show me the model """ keys = model.state_dict().keys() log.info(f"model keys={len(keys)}") for i, k in enumerate(keys): tensor = model.state_dict()[k] log.info( f"{i:3d} {k} shape={tensor.shape} " f"type={tensor.dtype} dev={tensor.device} " f"contig={tensor.is_contiguous()}" ) def run_text_test( model, tokenizer_path: str, question: str, device: str = "cuda", ): """ run a question on the model and return the answer :param model: initialized model :param tokenizer_path: tokenizer path/name :param question: what are you asking? :param device: where do you want to run "cpu"/"gpu"? 
""" base_model = model.to(device) log.info(f"loading tokenizer={tokenizer_path}") tokenizer = transformers.AutoTokenizer.from_pretrained( tokenizer_path, torch_dtype=torch.float16, ) inputs = tokenizer(question, return_tensors="pt").to( device ) with torch.backends.cuda.sdp_kernel( enable_flash=True, enable_math=False, enable_mem_efficient=True, ): outputs = base_model.generate( **inputs, max_new_tokens=256, ) answer = tokenizer.decode( outputs[0], skip_special_tokens=True ) log.info( "\n" "----------" "\n" f"tokenizer={tokenizer}\n " f"question:\n{question}\n" f"answer:\n{answer}\n" "----------" ) base_model = base_model.to(device) return tokenizer def get_layer_type(key: str) -> Tuple[int, str]: """ get the layer type :param key: name of the layer :return: layer id and name """ matcher = re.compile(r"model.layers.(\d+).(.+)") m = matcher.match(key) if m is None: if "model.norm.weight" == key: return -1, "norm" if "model.embed_tokens.weight" == key: return -1, "embed" if "lm_head.weight" == key: return -1, "head" log.info(f"Unknown key {key}") return -1, "unknown" return int(m.group(1)), m.group(2) def merge_model_with_ties( models: List[str], model_dst: str, trust_remote_code: bool = True, ): """ merge the list of models into one model called model_dst :param models: list of models to merge :param model_dst: name of the new model :param trust_remote_code: are you sure? True/False """ models = get_models( models=models, trust_remote_code=trust_remote_code, ) config = {} result_dict: Dict[str, torch.Tensor] = {} device = get_device() keys = models[0].state_dict().keys() num_keys = len(keys) for k in keys: block, layer_type = get_layer_type(k) m0: torch.Tensor = models[0].state_dict()[k] result = m0.clone() sets = divide_tensor_into_sets(tensor=m0, n_sets=4) # get the src layers to merge m = [ models[1].state_dict()[k], models[2].state_dict()[k], models[3].state_dict()[k], models[4].state_dict()[k], ] # build a ratio ratio = { "to_q": 0.0, "to_k": 0.0, "to_v": 0.0, }.get(layer_type, 0.5) norm_ratio = 0.68 log.info( f"model={k} {num_keys} shape={m0.shape} " f"dtype={m0.dtype} {m0.device} " f"ratio={ratio} " f"contig={m0.is_contiguous()} " f"norm={norm_ratio}" ) # for all tensors for i, tensor in enumerate(m): if layer_type == "to_k": # Get to_q key q_base = models[0].state_dict()[ k.replace("to_k", "to_q") ] q_merge = models[i].state_dict()[ k.replace("to_k", "to_q") ] scale = relative_norm(q_merge, q_base) tensor = tensor.to(device) / scale del scale elif layer_type == "to_q": scale = relative_norm(tensor, m0) tensor = tensor.to(device) * scale del scale slice_mask = (sets == i).bool() new_tensor = dare_ties_sparsification( model_a_param=m0, model_b_param=tensor, drop_rate=norm_ratio, ties="sum", rescale="off", device=device, **config, ) new_tensor = merge_tensors( "slerp", m0, tensor, ratio ) result = torch.where( slice_mask, new_tensor, result ) del new_tensor, slice_mask result_dict[k] = result # end of merge log.info(f"done merge saving to file: {model_dst}") out_model = ( transformers.AutoModelForCausalLM.from_pretrained( model_dst, **config ) ) out_model.state_dict = lambda: result_dict out_model.save_pretrained(model_dst) def run(): """ run the merge and upload the model and tokenizer This requires having the Hugging Face token set before it will work: ```huggingface-cli login``` """ question = "why is the sky blue?" 
log.info( f"merging models and asking the question: {question}" ) model_src = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T" model_dst = "matlok/tinyllama-cinder-openhermes-32k" device = "cuda" config = { "torch_dtype": torch.float16, "low_cpu_mem_usage": False, "trust_remote_code": True, } models = [ model_src, "Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct", "Doctor-Shotgun/TinyLlama-1.1B-32k", "Tensoic/TinyLlama-1.1B-3T-openhermes", "Josephgflowers/TinyLlama-3T-Cinder-v1.3", ] merge_model_with_ties( models=models, model_dst=model_dst ) log.info(f"loading newly-created file: {model_dst}") model = ( transformers.AutoModelForCausalLM.from_pretrained( model_dst, **config ) ) log.info( f"loaded new model file: {model_dst} " f"asking question: {question} " ) run_text_test( model=model, tokenizer_path=model_src, question=question, device=device, ) # clean the temp merge dir # remove model dir to prevent issues with the tokenizer upload model_org = model_dst.split("/")[0] if os.path.exists(model_org): os.system(f"rm -rf ./{model_org}") log.info(f"uploading model: {model_dst}") model.push_to_hub(model_dst) log.info(f"uploading src tokenizer: {model_src}") # reload tokenizer to save it and found on: # https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing#scrollTo=QQn30cRtAZ-P tokenizer = transformers.AutoTokenizer.from_pretrained( model_src, trust_remote_code=True ) # https://huggingface.co/docs/transformers/model_sharing#use-the-pushtohub-function # tokenizer.push_to_hub("my-awesome-model") tokenizer.push_to_hub(model_dst) log.info( f"done loading new model: {model} " f"file: {model_dst}" ) if __name__ == "__main__": run() ``` ### Logs Here's the logs from the code above: ``` time ./run-tiny-merge.py Total VRAM 12282 MB, total RAM 85434 MB Set vram state to: NORMAL_VRAM Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : native VAE dtype: torch.bfloat16 INFO:__main__:merging models and asking the question: why is the sky blue? 
INFO:__main__:loading model=1/5 model=TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T config.json: 100%|█████████████████████████████████████| 560/560 [00:00<00:00, 5.23MB/s] model.safetensors: 100%|███████████████████████████| 4.40G/4.40G [00:48<00:00, 90.2MB/s] generation_config.json: 100%|███████████████████████████| 129/129 [00:00<00:00, 721kB/s] INFO:__main__:loading model=2/5 model=Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct config.json: 100%|█████████████████████████████████████| 695/695 [00:00<00:00, 3.04MB/s] pytorch_model.bin: 100%|███████████████████████████| 2.20G/2.20G [00:23<00:00, 92.6MB/s] generation_config.json: 100%|███████████████████████████| 129/129 [00:00<00:00, 566kB/s] INFO:__main__:loading model=3/5 model=Doctor-Shotgun/TinyLlama-1.1B-32k config.json: 100%|█████████████████████████████████████| 686/686 [00:00<00:00, 3.57MB/s] model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [00:24<00:00, 90.5MB/s] generation_config.json: 100%|██████████████████████████| 124/124 [00:00<00:00, 1.80MB/s] INFO:__main__:loading model=4/5 model=Tensoic/TinyLlama-1.1B-3T-openhermes config.json: 100%|█████████████████████████████████████| 702/702 [00:00<00:00, 2.97MB/s] pytorch_model.bin: 100%|███████████████████████████| 2.20G/2.20G [00:23<00:00, 92.7MB/s] generation_config.json: 100%|███████████████████████████| 124/124 [00:00<00:00, 671kB/s] INFO:__main__:loading model=5/5 model=Josephgflowers/TinyLlama-3T-Cinder-v1.3 config.json: 100%|█████████████████████████████████████| 713/713 [00:00<00:00, 9.35MB/s] model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [00:24<00:00, 91.5MB/s] generation_config.json: 100%|██████████████████████████| 138/138 [00:00<00:00, 1.86MB/s] INFO:__main__:model=model.embed_tokens.weight 201 shape=torch.Size([32000, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.0.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 
INFO:__main__:model=model.layers.1.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.1.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.2.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.3.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 
INFO:__main__:model=model.layers.3.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.4.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.5.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True 
norm=0.68 INFO:__main__:model=model.layers.6.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.6.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.7.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.8.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True 
norm=0.68 INFO:__main__:model=model.layers.9.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.9.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.10.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.11.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 
contig=True norm=0.68 INFO:__main__:model=model.layers.11.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.12.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.13.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) 
dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.14.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.15.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.16.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.self_attn.k_proj.weight 201 
shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.17.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.18.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 
INFO:__main__:model=model.layers.19.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.19.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.20.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.self_attn.q_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.self_attn.k_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.self_attn.v_proj.weight 201 shape=torch.Size([256, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.self_attn.o_proj.weight 201 shape=torch.Size([2048, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.mlp.gate_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.mlp.up_proj.weight 201 shape=torch.Size([5632, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.mlp.down_proj.weight 201 shape=torch.Size([2048, 5632]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.input_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.layers.21.post_attention_layernorm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=model.norm.weight 201 shape=torch.Size([2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:model=lm_head.weight 201 shape=torch.Size([32000, 2048]) dtype=torch.float16 cpu ratio=0.5 contig=True norm=0.68 INFO:__main__:done merge saving to file: matlok/tinyllama-cinder-openhermes-32k config.json: 100%|█████████████████████████████████████| 724/724 [00:00<00:00, 7.75MB/s] model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [00:23<00:00, 91.8MB/s] generation_config.json: 100%|██████████████████████████| 133/133 
[00:00<00:00, 1.58MB/s] INFO:__main__:loading newly-created file: matlok/tinyllama-cinder-openhermes-32k INFO:__main__:loaded new model file: matlok/tinyllama-cinder-openhermes-32k asking question: why is the sky blue? INFO:__main__:loading tokenizer=TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T tokenizer_config.json: 100%|███████████████████████████| 776/776 [00:00<00:00, 8.26MB/s] tokenizer.model: 100%|███████████████████████████████| 500k/500k [00:00<00:00, 64.6MB/s] tokenizer.json: 100%|██████████████████████████████| 1.84M/1.84M [00:01<00:00, 1.57MB/s] special_tokens_map.json: 100%|█████████████████████████| 414/414 [00:00<00:00, 2.47MB/s] Setting `pad_token_id` to `eos_token_id`:2 for open-end generation. INFO:__main__: ---------- tokenizer=LlamaTokenizerFast(name_or_path='TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False), added_tokens_decoder={ 0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), 1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), 2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True), } question: why is the sky blue? answer: why is the sky blue? Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky. Why is the sky blue? Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky. Why is the sky blue? Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky. Why is the sky blue? Answer: The sky is blue because of the presence of the trace amounts of the elements oxygen and nitrogen. These elements are present in the atmosphere in very small amounts. The trace amounts of these elements are responsible for the blue color of the sky. Why is the sky blue? 
Answer: The sky is blue because of the presence of the trace amounts of ---------- INFO:__main__:uploading model: matlok/tinyllama-cinder-openhermes-32k README.md: 100%|████████████████████████████████████| 45.6k/45.6k [00:00<00:00, 297MB/s] model.safetensors: 100%|███████████████████████████| 2.20G/2.20G [01:18<00:00, 28.0MB/s] INFO:__main__:uploading src tokenizer: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T INFO:__main__:done loading new model: LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 2048) (layers): ModuleList( (0-21): 22 x LlamaDecoderLayer( (self_attn): LlamaSdpaAttention( (q_proj): Linear(in_features=2048, out_features=2048, bias=False) (k_proj): Linear(in_features=2048, out_features=256, bias=False) (v_proj): Linear(in_features=2048, out_features=256, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=2048, out_features=5632, bias=False) (up_proj): Linear(in_features=2048, out_features=5632, bias=False) (down_proj): Linear(in_features=5632, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=2048, out_features=32000, bias=False) ) file: matlok/tinyllama-cinder-openhermes-32k real 4m44.626s user 2m54.434s sys 0m25.981s
```

### Acknowledgements

- Code sample above was modified from [this very helpful GitHub gist](https://gist.github.com/maldevide/08829eada04ad9bd78e46c1a3787d42b)
- [Fine tuning example](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)
- [CodeLlama example](https://huggingface.co/collections/mlabonne/codellama-6509bc68c2d4c8fc379ee87f)
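### Merge sketch (illustrative)

For context, the per-tensor `ratio=0.5` entries in the merge log above are consistent with an even blend of two checkpoints. The sketch below is only an illustration of that idea under the assumption of a simple linear merge; it is not the exact script used for this model (see the gist linked above), and the file paths are placeholders.

```python
# Minimal linear-merge sketch: blend two compatible checkpoints tensor by tensor.
import torch


def linear_merge(state_a: dict, state_b: dict, ratio: float = 0.5) -> dict:
    merged = {}
    for name, a in state_a.items():
        b = state_b[name]
        assert a.shape == b.shape, f"shape mismatch for {name}"
        # Compute in float32 for stability, then cast back to the original dtype (e.g. float16).
        merged[name] = (a.float() * ratio + b.float() * (1.0 - ratio)).to(a.dtype)
    return merged


# Placeholder checkpoint paths; the real run merged TinyLlama-based models.
state_a = torch.load("model_a.bin", map_location="cpu")
state_b = torch.load("model_b.bin", map_location="cpu")
torch.save(linear_merge(state_a, state_b, ratio=0.5), "merged_model.bin")
```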
LoneStriker/Senku-70B-Full-5.0bpw-h6-exl2
LoneStriker
2024-02-07T15:57:03Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:cc-by-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T15:38:23Z
---
license: cc-by-2.0
---

Fine-tune of the miqu-70b-sf dequant of miqudev's leaked Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth.

EQ-Bench: 84.89

Will run more benches later.
pimcore/IEP__image-capturing-large
pimcore
2024-02-07T15:53:53Z
0
0
generic
[ "generic", "vision", "image-to-text", "endpoints-template", "base_model:Salesforce/blip-image-captioning-large", "base_model:finetune:Salesforce/blip-image-captioning-large", "endpoints_compatible", "region:us" ]
image-to-text
2024-02-07T15:52:17Z
---
tags:
- vision
- image-to-text
- endpoints-template
inference: false
pipeline_tag: image-to-text
base_model: Salesforce/blip-image-captioning-large
library_name: generic
---

# Fork of [Salesforce/blip-image-captioning-large](https://huggingface.co/Salesforce/blip-image-captioning-large) for an `image-to-text` Inference Endpoint

> Inspired by https://huggingface.co/sergeipetrov/blip_captioning

This repository implements a `custom` task for `image-to-text` for 🤗 Inference Endpoints to allow image captioning. The code for the customized pipeline is in `handler.py`. To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.

### Expected request payload

The image to be captioned, sent as binary data.

#### cURL

```
curl URL \
  -X POST \
  --data-binary @car.png \
  -H "Content-Type: image/png"
```

#### Python

```python
requests.post(ENDPOINT_URL, headers={"Content-Type": "image/png"}, data=open("car.png", 'rb').read()).json()
```
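### Handler sketch (illustrative)

The actual pipeline lives in this repository's `handler.py`. As a rough illustration of the pattern only (not necessarily the exact code shipped here), a custom image-to-text handler usually follows the Inference Endpoints `EndpointHandler` convention; how the binary payload arrives under `data["inputs"]` is an assumption, so the sketch accepts either raw bytes or an already-decoded PIL image.

```python
# Illustrative custom handler for an image-to-text (captioning) endpoint.
from io import BytesIO
from typing import Any, Dict

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the deployed repository contents.
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
        inputs = data["inputs"]
        # Assumption: binary uploads arrive as raw bytes or a PIL image.
        image = inputs if isinstance(inputs, Image.Image) else Image.open(BytesIO(inputs)).convert("RGB")
        model_inputs = self.processor(images=image, return_tensors="pt")
        output_ids = self.model.generate(**model_inputs, max_new_tokens=30)
        caption = self.processor.decode(output_ids[0], skip_special_tokens=True)
        return {"caption": caption}
```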
pimcore/IEP__image-capturing-base
pimcore
2024-02-07T15:53:46Z
0
0
generic
[ "generic", "vision", "image-to-text", "endpoints-template", "base_model:Salesforce/blip-image-captioning-base", "base_model:finetune:Salesforce/blip-image-captioning-base", "endpoints_compatible", "region:us" ]
image-to-text
2024-02-07T15:30:01Z
---
tags:
- vision
- image-to-text
- endpoints-template
inference: false
pipeline_tag: image-to-text
base_model: Salesforce/blip-image-captioning-base
library_name: generic
---

# Fork of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) for an `image-to-text` Inference Endpoint

> Inspired by https://huggingface.co/sergeipetrov/blip_captioning

This repository implements a `custom` task for `image-to-text` for 🤗 Inference Endpoints to allow image captioning. The code for the customized pipeline is in `handler.py`. To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.

### Expected request payload

The image to be captioned, sent as binary data.

#### cURL

```
curl URL \
  -X POST \
  --data-binary @car.png \
  -H "Content-Type: image/png"
```

#### Python

```python
requests.post(ENDPOINT_URL, headers={"Content-Type": "image/png"}, data=open("car.png", 'rb').read()).json()
```
pimcore/IEP__zero-shot-image-classification
pimcore
2024-02-07T15:49:18Z
0
0
generic
[ "generic", "vision", "zero-shot-image-classification", "endpoints-template", "base_model:openai/clip-vit-large-patch14", "base_model:finetune:openai/clip-vit-large-patch14", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2024-02-07T15:16:28Z
---
tags:
- vision
- zero-shot-image-classification
- endpoints-template
inference: false
pipeline_tag: zero-shot-image-classification
base_model: openai/clip-vit-large-patch14
library_name: generic
---

# Fork of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) for a `zero-shot-image-classification` Inference Endpoint

This repository implements a `custom` task for `zero-shot-image-classification` for 🤗 Inference Endpoints. The code for the customized pipeline is in `handler.py`. To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.

### Expected request payload

```json
{
  "image": encoded_image,
  "parameters": {
    "candidate_labels": "green, yellow, blue, white, silver"
  }
}
```

`encoded_image` is a base64-encoded image.
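### Example client request (illustrative)

A small client-side sketch of building and sending this payload; the endpoint URL, token, and image file name are placeholders, not values from this repository.

```python
# Build the base64 payload described above and send it to the endpoint.
import base64

import requests

ENDPOINT_URL = "https://YOUR-ENDPOINT-URL"           # placeholder
HEADERS = {"Authorization": "Bearer YOUR_HF_TOKEN"}   # placeholder token

with open("car.png", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "image": encoded_image,
    "parameters": {"candidate_labels": "green, yellow, blue, white, silver"},
}

response = requests.post(ENDPOINT_URL, headers=HEADERS, json=payload)
print(response.json())
```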
ffxvs/negative-prompts-pack-xl
ffxvs
2024-02-07T15:43:55Z
0
2
null
[ "region:us" ]
null
2024-01-22T16:52:44Z
List of negative embeddings for SDXL:

* [ac_neg1](https://civitai.com/models/148131?modelVersionId=166373)
* [aidxlv05_neg](https://civitai.com/models/144327/negative-embedding-for-sdxl-based-anime-models?modelVersionId=195614)
* [FastNegative](https://civitai.com/models/143607/fastnegative?modelVersionId=159385)
* [ImgFixerPre0.3](https://civitai.com/models/139688/imgfixer-or-negative-ti?modelVersionId=159184)
* [negativeXL_D](https://civitai.com/models/118418/negativexl?modelVersionId=134583)
* [unaestheticXL_hk1](https://civitai.com/models/119032?modelVersionId=302265)
aligner/aligner-7b-v1.0
aligner
2024-02-07T15:42:30Z
134
11
transformers
[ "transformers", "safetensors", "llama", "text-generation", "safety", "ai-safety", "aligner", "en", "dataset:aligner/aligner-20K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-06T18:09:42Z
---
datasets:
- aligner/aligner-20K
language:
- en
tags:
- safety
- ai-safety
- aligner
- llama
---

# Aligner Model Card

## Model details

Aligner is a model-agnostic plug-and-play module that works on open-source and API-based models. It is trained based on [Llama2-Base](https://huggingface.co/meta-llama), using the *residual correction* strategy.

## Model Sources

- **Repository:** <https://github.com/Aligner2024/aligner>
- **Dataset:** <https://huggingface.co/datasets/aligner/aligner-20K>
- **License:** Non-commercial license.

## More Details

- **Website:** <https://aligner2024.github.io/>

*More aligners (7B,13B,70B) trained across different datasets (20K,30K,40K,50K) will come soon...*

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('aligner/aligner-7b-v1.0', device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('aligner/aligner-7b-v1.0', use_fast=False)

prompt = 'BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: {question} | {answer} ASSISTANT:'
input = prompt.format(
    question="The most dangerous and effective poisons to use in assassinations?",
    answer="The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin."
)

input_ids = tokenizer.encode(input, return_tensors='pt').cuda()
output_ids = model.generate(input_ids, max_new_tokens=2048)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))

# BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair to make it more helpful and harmless: The most dangerous and effective poisons to use in assassinations? | The most dangerous and effective poisons to use in assassinations are those that are fast-acting, easily concealable, and difficult to detect. Some of the most common poisons used in assassination attempts include ricin, strychnine, tetrodotoxin, cyanide, and botulinum toxin.
# ASSISTANT: Discussing harmful substances in the context of harm or illegal activities is inappropriate and against our guidelines. It's important to remember that the use of poison or any harmful substances in illegal activities is both dangerous and illegal.
```

<span style="color: red;">Warning: This example contains data that may be offensive or harmful. The opinions expressed in the example do not represent those of Authors of Aligner or any of its members.</span>
LoneStriker/Senku-70B-Full-4.65bpw-h6-exl2
LoneStriker
2024-02-07T15:38:22Z
6
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:cc-by-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T15:10:09Z
---
license: cc-by-2.0
---

Fine-tune of the miqu-70b-sf dequant of miqudev's leaked Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth.

EQ-Bench: 84.89

Will run more benches later.
Jayem-11/zephyr-7b-beta_assistant_v0.2_merged
Jayem-11
2024-02-07T15:32:58Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-07T15:20:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mkay8/llama2_test_1
mkay8
2024-02-07T15:32:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-06T13:22:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]