Dataset schema (column types and value ranges as reported by the dataset viewer):

| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-09 00:41:25 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 549 classes |
| tags | list | 1 to 4.05k items |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-09 00:41:08 |
| card | string | length 11 to 1.01M |
cutycat2000x/LoRA
cutycat2000x
2024-05-23T12:44:45Z
5
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:cutycat2000x/InterDiffusion-3.8", "base_model:adapter:cutycat2000x/InterDiffusion-3.8", "license:mit", "region:us" ]
text-to-image
2024-05-22T08:55:03Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- a smiling girl with sparkles in her eyes, walking in a garden, in the morning --style anime output: url: example1.png - text: >- firewatch landscape, Graphic Novel, Pastel Art, Poster, Golden Hour, Electric Colors, 4k, RGB, Geometric, Volumetric, Lumen Global Illumination, Ray Tracing Reflections, Twisted Rays, Glowing Edges, RTX --raw output: url: example2.png - text: >- Samsung Galaxy S9 output: url: example3.png - text: >- cat, 4k, 8k, hyperrealistic, realistic, High-resolution, unreal engine 5, rtx, 16k, taken on a sony camera, Cinematic, dramatic lighting output: url: example4.png - text: >- cinimatic closeup of burning skull output: url: example5.png - text: >- frozen elsa output: url: example6.png - text: >- A rainbow tree, anime style, tree in focus output: url: example7.png - text: >- A cat holding a sign that reads "Hello World" in cursive text output: url: example8.png - text: >- A birthday card for "Meow" output: url: example9.png base_model: cutycat2000x/InterDiffusion-3.8 instance_prompt: null license: mit --- # LoRA <Gallery /> ## Model description The Dall-E 3 style LoRA for InterDiffusion-3.8 ## Download model Weights for this model are available in Safetensors format. [Download](/cutycat2000x/LoRA/tree/main) them in the Files & versions tab.
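The card points only to the Files & versions tab for the weights; a minimal `diffusers` sketch for attaching this LoRA is given below. It assumes the base model `cutycat2000x/InterDiffusion-3.8` loads as a standard Stable Diffusion text-to-image pipeline, and the prompt is taken from the card's first widget example.

```python
# Minimal sketch (assumption: the base model exposes a standard SD pipeline).
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "cutycat2000x/InterDiffusion-3.8", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repository.
pipe.load_lora_weights("cutycat2000x/LoRA")

image = pipe(
    "a smiling girl with sparkles in her eyes, walking in a garden, in the morning"
).images[0]
image.save("example.png")
```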
Josephgflowers/TinyLlama-Cinder-Tiny-Agent
Josephgflowers
2024-05-23T12:44:31Z
150
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:Josephgflowers/TinyLlama-Cinder-Math-Train", "base_model:finetune:Josephgflowers/TinyLlama-Cinder-Math-Train", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T15:23:26Z
--- license: mit base_model: Josephgflowers/TinyLlama-Cinder-Math-Train tags: - generated_from_trainer model-index: - name: TinyLlama-Cinder-Tiny-Agent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyLlama-Cinder-Tiny-Agent This model is a fine-tuned version of [Josephgflowers/TinyLlama-Cinder-Math-Train](https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Math-Train) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
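The card stops at the training configuration; a minimal inference sketch with the `transformers` pipeline is shown below. The prompt and sampling settings are illustrative, not taken from the card.

```python
# Minimal text-generation sketch; prompt and sampling settings are illustrative.
from transformers import pipeline

pipe = pipeline("text-generation", model="Josephgflowers/TinyLlama-Cinder-Tiny-Agent")

output = pipe("What is 12 * 9?", max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```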
Apel-sin/llama-3-8B-ortho-v2-exl2
Apel-sin
2024-05-23T12:44:02Z
0
0
null
[ "region:us" ]
null
2024-05-23T12:18:33Z
# Exllama v2 Llama-3-8B-Instruct-ortho-v2

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model.</b>

Each branch holds the model at a different bits-per-weight, while main contains only the measurement.json for further conversions.

Original model by <a href="https://huggingface.co/hjhj3168">hjhj3168</a><br>
Calibration dataset: <a href="https://huggingface.co/datasets/cosmicvalor/toxic-qna">toxic-qna</a>

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Apel-sin/llama-3-8B-ortho-v2-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Apel-sin/llama-3-8B-ortho-v2-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
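Since the weights live on per-branch revisions rather than on main, a short download sketch with `huggingface_hub` may help; the local directory name is only an example.

```python
# Fetch one quantization branch (here the recommended 6.5 bpw) from the Hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Apel-sin/llama-3-8B-ortho-v2-exl2",
    revision="6_5",  # branch name = bits per weight
    local_dir="llama-3-8B-ortho-v2-exl2-6_5",  # arbitrary example path
)
```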
helenai/facebook-hubert-large-ls960-ft-ov
helenai
2024-05-23T12:35:34Z
4
0
transformers
[ "transformers", "openvino", "hubert", "automatic-speech-recognition", "en", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-01T15:52:17Z
---
language:
- en
tags:
- openvino
---

# facebook/hubert-large-ls960-ft

This is the [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) model converted to [OpenVINO](https://openvino.ai) for accelerated inference.

An example of how to do inference on this model:

```python
from optimum.intel import OVModelForCTC
from transformers import AutoProcessor, pipeline

# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/facebook-hubert-large-ls960-ft-ov"
feature_extractor = AutoProcessor.from_pretrained(model_id)
model = OVModelForCTC.from_pretrained(model_id)
pipe = pipeline("automatic-speech-recognition", model=model, feature_extractor=feature_extractor)
result = pipe("audio.wav")  # path to a local audio file
print(result)
```
buming/q-Taxi-v3
buming
2024-05-23T12:30:58Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-23T12:30:56Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="buming/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
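The `load_from_hub` helper in the snippet above comes from the Deep RL course utilities and is not shown in the card; a minimal equivalent is sketched below, assuming the pickle file holds a dictionary with an `env_id` key as used above.

```python
# Minimal stand-in for load_from_hub (assumes a plain pickle containing a dict).
import pickle

import gymnasium as gym  # the older `gym` package works the same way here
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="buming/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```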
falan42/gemma-SODA_2b-Finetune
falan42
2024-05-23T12:27:44Z
6
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:20:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mike249/whisper-small-he-v4
mike249
2024-05-23T12:26:53Z
109
1
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T11:45:50Z
---
language:
- he
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- ivrit-ai/whisper-training
metrics:
- wer
model-index:
- name: Whisper Small Hebrew 4
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: ivrit-ai/whisper-training
      type: ivrit-ai/whisper-training
      args: 'config: he, split: train'
    metrics:
    - name: Wer
---

# Whisper Small Hebrew 4

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ivrit-ai/whisper-training dataset. It achieves ~1.5% better results in terms of WER than the previous mike249/whisper-small-he-3.

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.19.1
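The card omits an inference example; a minimal transcription sketch with the `transformers` ASR pipeline is given below. The audio path is a placeholder.

```python
# Minimal transcription sketch; "sample_hebrew.wav" is a placeholder file path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mike249/whisper-small-he-v4",
    chunk_length_s=30,  # chunked decoding for audio longer than 30 seconds
)
print(asr("sample_hebrew.wav")["text"])
```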
lyy14011305/firefly-qwen-7b-sft-qlora
lyy14011305
2024-05-23T12:22:39Z
3
0
peft
[ "peft", "safetensors", "base_model:Qwen/Qwen-7B-Chat", "base_model:adapter:Qwen/Qwen-7B-Chat", "region:us" ]
null
2024-05-23T12:14:58Z
---
library_name: peft
base_model: Qwen/Qwen-7B-Chat
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
- bnb_4bit_quant_storage: uint8
- load_in_4bit: True
- load_in_8bit: False

### Framework versions

- PEFT 0.10.0
- PEFT 0.4.0
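A sketch for loading this adapter for inference is given below; the 4-bit settings mirror the quantization config above, and `trust_remote_code` is needed because Qwen-7B-Chat ships custom modeling code.

```python
# Load the base model in 4-bit and attach the QLoRA adapter (settings mirror the card).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Qwen/Qwen-7B-Chat"
adapter_id = "lyy14011305/firefly-qwen-7b-sft-qlora"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)
```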
nbeerbower/llama-3-bophades-v3-8B
nbeerbower
2024-05-23T12:21:07Z
179
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:kyujinpy/orca_math_dpo", "base_model:nbeerbower/llama-3-wissenschaft-8B", "base_model:finetune:nbeerbower/llama-3-wissenschaft-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-02T23:47:28Z
--- library_name: transformers base_model: - nbeerbower/llama-3-wissenschaft-8B datasets: - jondurbin/truthy-dpo-v0.1 - kyujinpy/orca_math_dpo license: other license_name: llama3 --- ![image/png](https://huggingface.co/nbeerbower/bophades-mistral-7B/resolve/main/bophades.png) # llama-3-bophades-v3-8B This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE) [nbeerbower/llama-3-wissenschaft-8B](https://huggingface.co/nbeerbower/llama-3-wissenschaft-8B) finetuned on [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) and [kyujinpy/orca_math_dpo](https://huggingface.co/datasets/kyujinpy/orca_math_dpo). ### Method Finetuned using an A100 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration Dataset preperation and message formatting: ```python def chatml_format(example): # Initialize formatted system message system = "" # Check if 'system' field exists and is not None if example.get('system'): system = "<|im_start|>system\n" + example['system'] + "<|im_end|>\n" # Format instruction instruction = "" if example.get('prompt'): instruction = example['prompt'] if example.get('question'): instruction = example['question'] prompt = "<|im_start|>user\n" + instruction + "<|im_end|>\n<|im_start|>assistant\n" # Format chosen answer chosen = example['chosen'] + "<|im_end|>\n" # Format rejected answer rejected = example['rejected'] + "<|im_end|>\n" return { "prompt": system + prompt, "chosen": chosen, "rejected": rejected, } # Array of datasets to concat ds = [ "jondurbin/truthy-dpo-v0.1", "kyujinpy/orca_math_dpo" ] # load_dataset and combine all loaded_datasets = [load_dataset(dataset_name, split='train') for dataset_name in ds] dataset = concatenate_datasets(loaded_datasets) # Save columns original_columns = dataset.column_names # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "left" # Format dataset dataset = dataset.map( chatml_format, remove_columns=original_columns ) ``` LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=2, gradient_accumulation_steps=4, gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine", max_steps=1000, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, max_prompt_length=2048, max_length=4096, force_use_ref_model=True ) # Fine-tune model with DPO dpo_trainer.train() ```
saishf/SOVL-Mega-Mash-L3-8B
saishf
2024-05-23T12:21:00Z
115
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:saishf/Merge-Mayhem-L3-V2", "base_model:merge:saishf/Merge-Mayhem-L3-V2", "base_model:saishf/Merge-Mayhem-L3-V2.1", "base_model:merge:saishf/Merge-Mayhem-L3-V2.1", "base_model:saishf/Ortho-SOVL-8B-L3", "base_model:merge:saishf/Ortho-SOVL-8B-L3", "base_model:saishf/SOVLish-Maid-L3-8B", "base_model:merge:saishf/SOVLish-Maid-L3-8B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-12T12:19:18Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - saishf/SOVLish-Maid-L3-8B - saishf/Ortho-SOVL-8B-L3 - saishf/Merge-Mayhem-L3-V2 - saishf/Merge-Mayhem-L3-V2.1 model-index: - name: SOVL-Mega-Mash-L3-8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/SOVL-Mega-Mash-L3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 79.68 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/SOVL-Mega-Mash-L3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 67.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/SOVL-Mega-Mash-L3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.84 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/SOVL-Mega-Mash-L3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/SOVL-Mega-Mash-L3-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 67.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/SOVL-Mega-Mash-L3-8B name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details This model is a merge of all of my SOVL models, in the hopes to create the most unhinged and wild model possible. ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [saishf/Ortho-SOVL-8B-L3](https://huggingface.co/saishf/Ortho-SOVL-8B-L3) as a base. 
### Models Merged

The following models were included in the merge:
* [saishf/SOVLish-Maid-L3-8B](https://huggingface.co/saishf/SOVLish-Maid-L3-8B)
* [saishf/Merge-Mayhem-L3-V2](https://huggingface.co/saishf/Merge-Mayhem-L3-V2)
* [saishf/Merge-Mayhem-L3-V2.1](https://huggingface.co/saishf/Merge-Mayhem-L3-V2.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: saishf/Ortho-SOVL-8B-L3
  - model: saishf/Merge-Mayhem-L3-V2
  - model: saishf/Merge-Mayhem-L3-V2.1
  - model: saishf/SOVLish-Maid-L3-8B
merge_method: model_stock
base_model: saishf/Ortho-SOVL-8B-L3
dtype: bfloat16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__SOVL-Mega-Mash-L3-8B)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 67.43 |
| AI2 Reasoning Challenge (25-Shot) | 62.03 |
| HellaSwag (10-Shot)               | 79.68 |
| MMLU (5-Shot)                     | 67.64 |
| TruthfulQA (0-shot)               | 51.84 |
| Winogrande (5-shot)               | 76.16 |
| GSM8k (5-shot)                    | 67.25 |
Alfiano07/tes_adamata_trashnet
Alfiano07
2024-05-23T12:20:58Z
0
0
null
[ "license:unknown", "region:us" ]
null
2024-05-20T11:20:08Z
--- license: unknown --- Trained with PyTorch using the EfficientNet B4 pretrained model & architecture.
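The card states only PyTorch and EfficientNet-B4; the sketch below shows how such a classifier is typically assembled. The six-class head (TrashNet's usual label set) and the checkpoint filename are assumptions, not details from the card.

```python
# Illustrative setup only: the 6-class head and checkpoint name are assumptions.
import torch
from torchvision import models

model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.IMAGENET1K_V1)
num_classes = 6  # glass, paper, cardboard, plastic, metal, trash (assumed)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, num_classes)

# The fine-tuned weights would then be loaded from this repo's checkpoint, e.g.:
# model.load_state_dict(torch.load("pytorch_model.bin", map_location="cpu"))
```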
Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle
Dampfinchen
2024-05-23T12:19:05Z
54
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Dampfinchen/Llama-3-8B-Ultra-Instruct", "base_model:merge:Dampfinchen/Llama-3-8B-Ultra-Instruct", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:merge:NousResearch/Meta-Llama-3-8B", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct", "license:llama3", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-12T21:49:54Z
--- license: llama3 library_name: transformers tags: - mergekit - merge base_model: - Dampfinchen/Llama-3-8B-Ultra-Instruct - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B-Instruct model-index: - name: Llama-3-8B-Ultra-Instruct-SaltSprinkle results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.35 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 77.76 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 67.88 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 52.82 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base. ### Models Merged The following models were included in the merge: * [Dampfinchen/Llama-3-8B-Ultra-Instruct](https://huggingface.co/Dampfinchen/Llama-3-8B-Ultra-Instruct) * [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: NousResearch/Meta-Llama-3-8B-Instruct parameters: density: 1 weight: 1 - model: Dampfinchen/Llama-3-8B-Ultra-Instruct parameters: density: 0.5 weight: 0.2 merge_method: dare_ties base_model: NousResearch/Meta-Llama-3-8B dtype: bfloat16 ``` Test of salt sprinkle methode. 
The goal is to retain all of L3 Instruct's capabilities while adding better RP, RAG, German, and story-writing capabilities in the form of Ultra Instruct. The model may generate harmful responses; I'm not responsible for what you do with this model.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Dampfinchen__Llama-3-8B-Ultra-Instruct-SaltSprinkle)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 67.61 |
| AI2 Reasoning Challenge (25-Shot) | 61.35 |
| HellaSwag (10-Shot)               | 77.76 |
| MMLU (5-Shot)                     | 67.88 |
| TruthfulQA (0-shot)               | 52.82 |
| Winogrande (5-shot)               | 74.98 |
| GSM8k (5-shot)                    | 70.89 |
Likich/tinyllama-finetune-qualcoding_1000_prompt2
Likich
2024-05-23T12:17:34Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T12:17:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ananasa/lora_llama3_5epochs
ananasa
2024-05-23T12:14:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-22T16:17:33Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** ananasa - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
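A minimal inference sketch following the standard Unsloth loading path is shown below; the sequence length, prompt, and 4-bit setting are illustrative rather than taken from the card.

```python
# Load the finetune with Unsloth and run a short generation (GPU required).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ananasa/lora_llama3_5epochs",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Tell me a short story about a robot.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```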
mpachha/Meta-Llama-3-ft
mpachha
2024-05-23T12:10:47Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T12:10:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ramikan-BR/tinyllama-coder-py-4bit_LORA-v5
Ramikan-BR
2024-05-23T12:10:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "base_model:finetune:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T12:10:05Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** Ramikan-BR - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
blackhole33/uzbek-speaker-verification-v8
blackhole33
2024-05-23T12:09:30Z
0
0
nemo
[ "nemo", "pytorch", "NeMo", "license:cc-by-4.0", "region:us" ]
null
2024-05-23T12:09:15Z
--- license: cc-by-4.0 library_name: nemo tags: - pytorch - NeMo --- # Uzbek-speaker-verification-v8 <style> img { display: inline; } </style> [![Model architecture](https://img.shields.io/badge/Model_Arch-PUT-YOUR-ARCHITECTURE-HERE-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-PUT-YOUR-MODEL-SIZE-HERE-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-PUT-YOUR-LANGUAGE-HERE-lightgrey#model-badge)](#datasets) **Put a short model description here.** See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html) for complete architecture details. ## NVIDIA NeMo: Training To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed latest Pytorch version. ``` pip install nemo_toolkit['all'] ``` ## How to Use this Model The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset. ### Automatically instantiate the model **NOTE**: Please update the model class below to match the class of the model being uploaded. ```python import nemo.core import ModelPT model = ModelPT.from_pretrained("ai-nightcoder/uzbek-speaker-verification-v8") ``` ### NOTE Add some information about how to use the model here. An example is provided for ASR inference below. ### Transcribing using Python First, let's get a sample ``` wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav ``` Then simply do: ``` asr_model.transcribe(['2086-149220-0033.wav']) ``` ### Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="ai-nightcoder/uzbek-speaker-verification-v8" audio_dir="" ``` ### Input **Add some information about what are the inputs to this model** ### Output **Add some information about what are the outputs of this model** ## Model Architecture **Add information here discussing architectural details of the model or any comments to users about the model.** ## Training **Add information here about how the model was trained. It should be as detailed as possible, potentially including the the link to the script used to train as well as the base config used to train the model. If extraneous scripts are used to prepare the components of the model, please include them here.** ### NOTE An example is provided below for ASR The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_transducer_bpe.yaml). The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). ### Datasets **Try to provide as detailed a list of datasets as possible. 
If possible, provide links to the datasets on HF by adding it to the manifest section at the top of the README (marked by ---).** ### NOTE An example for the manifest section is provided below for ASR datasets datasets: - librispeech_asr - fisher_corpus - Switchboard-1 - WSJ-0 - WSJ-1 - National-Singapore-Corpus-Part-1 - National-Singapore-Corpus-Part-6 - vctk - voxpopuli - europarl - multilingual_librispeech - mozilla-foundation/common_voice_8_0 - MLCommons/peoples_speech The corresponding text in this section for those datasets is stated below - The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams. The training dataset consists of private subset with 40K hours of English speech plus 24K hours from the following public datasets: - Librispeech 960 hours of English speech - Fisher Corpus - Switchboard-1 Dataset - WSJ-0 and WSJ-1 - National Speech Corpus (Part 1, Part 6) - VCTK - VoxPopuli (EN) - Europarl-ASR (EN) - Multilingual Librispeech (MLS EN) - 2,000 hour subset - Mozilla Common Voice (v7.0) - People's Speech - 12,000 hour subset ## Performance **Add information here about the performance of the model. Discuss what is the metric that is being used to evaluate the model and if there are external links explaning the custom metric, please link to it. ### NOTE An example is provided below for ASR metrics list that can be added to the top of the README model-index: - name: PUT_MODEL_NAME results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: AMI (Meetings test) type: edinburghcstr/ami config: ihm split: test args: language: en metrics: - name: Test WER type: wer value: 17.10 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Earnings-22 type: revdotcom/earnings22 split: test args: language: en metrics: - name: Test WER type: wer value: 14.11 Provide any caveats about the results presented in the top of the discussion so that nuance is not lost. It should ideally be in a tabular format (you can use the following website to make your tables in markdown format - https://www.tablesgenerator.com/markdown_tables)** ## Limitations **Discuss any practical limitations to the model when being used in real world cases. They can also be legal disclaimers, or discussion regarding the safety of the model (particularly in the case of LLMs).** ### Note An example is provided below Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech. ## License License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. ## References **Provide appropriate references in the markdown link format below. Please order them numerically.** [1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
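The card's auto-instantiation snippet leaves the model class as a template placeholder; a sketch is given below assuming the checkpoint restores through NeMo's usual speaker-verification class, `EncDecSpeakerLabelModel` (the repo id is the one used in the card).

```python
# Assumes the checkpoint was saved from EncDecSpeakerLabelModel, NeMo's standard
# class for speaker verification; substitute the actual class if it differs.
from nemo.collections.asr.models import EncDecSpeakerLabelModel

model = EncDecSpeakerLabelModel.from_pretrained("ai-nightcoder/uzbek-speaker-verification-v8")
```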
Arbi-Houssem/speecht5_ar_tn_1.0
Arbi-Houssem
2024-05-23T12:04:26Z
82
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "ar", "dataset:Arbi-Houssem/datasetSTT-TTS", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-05-22T10:48:46Z
--- language: - ar license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - Arbi-Houssem/datasetSTT-TTS model-index: - name: SpeechT5 TTS Tunisie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Tunisie This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the datasetSTT-TTS dataset. It achieves the following results on the evaluation set: - Loss: 0.9733 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.4167 | 10 | 0.9837 | | No log | 0.8333 | 20 | 0.9733 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
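The card omits an inference example; a minimal synthesis sketch is given below. The vocoder choice (`microsoft/speecht5_hifigan`) and the random speaker embedding are assumptions; real usage needs an x-vector for the target voice.

```python
# Minimal SpeechT5 synthesis sketch; the speaker embedding is a random placeholder.
import soundfile as sf
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo_id = "Arbi-Houssem/speecht5_ar_tn_1.0"
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder

inputs = processor(text="مرحبا بالعالم", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder x-vector; use a real one
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```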
keremberke/yolov8m-valorant-detection
keremberke
2024-05-23T12:02:03Z
3,039
10
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/valorant-object-detection", "license:agpl-3.0", "model-index", "region:us" ]
object-detection
2023-01-28T21:08:38Z
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/valorant-object-detection model-index: - name: keremberke/yolov8m-valorant-detection results: - task: type: object-detection dataset: type: keremberke/valorant-object-detection name: valorant-object-detection split: validation metrics: - type: precision value: 0.96466 name: mAP@0.5(box) license: agpl-3.0 --- <div align="center"> <img width="640" alt="keremberke/yolov8m-valorant-detection" src="https://huggingface.co/keremberke/yolov8m-valorant-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['dropped spike', 'enemy', 'planted spike', 'teammate'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8m-valorant-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
cosuleabianca/bert_qa
cosuleabianca
2024-05-23T12:01:30Z
135
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
question-answering
2024-05-23T12:00:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
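Given the repository's question-answering pipeline tag, a minimal extractive-QA sketch is shown below; the question and context are illustrative.

```python
# Minimal extractive question-answering sketch; inputs are illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="cosuleabianca/bert_qa")
result = qa(
    question="What does the model return?",
    context="An extractive QA model returns the answer span it finds in the given context.",
)
print(result["answer"], result["score"])
```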
keremberke/yolov8m-table-extraction
keremberke
2024-05-23T12:00:01Z
11,705
38
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/table-extraction", "license:agpl-3.0", "model-index", "region:us" ]
object-detection
2023-01-29T04:54:05Z
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/table-extraction model-index: - name: keremberke/yolov8m-table-extraction results: - task: type: object-detection dataset: type: keremberke/table-extraction name: table-extraction split: validation metrics: - type: precision value: 0.95194 name: mAP@0.5(box) license: agpl-3.0 --- <div align="center"> <img width="640" alt="keremberke/yolov8m-table-extraction" src="https://huggingface.co/keremberke/yolov8m-table-extraction/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['bordered', 'borderless'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8m-table-extraction') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
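The snippet above prints the raw `Boxes` object. As a hedged follow-up that is not part of the original card, the sketch below turns each detection into a label plus pixel coordinates, assuming the standard Ultralytics `results[0].boxes` and `model.names` attributes.

```python
# Hedged follow-up sketch: map each detected table to a label and corner coordinates.
# Reuses `model` and `results` from the snippet above.
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()   # top-left / bottom-right corners in pixels
    label = model.names[int(box.cls)]       # 'bordered' or 'borderless'
    confidence = float(box.conf)
    print(f"{label} ({confidence:.2f}): ({x1:.0f}, {y1:.0f}) -> ({x2:.0f}, {y2:.0f})")
```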
Llimy1/llama2-train-test
Llimy1
2024-05-23T11:57:52Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:37:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
interneuronai/advertisement_cap_on_banner_classification_bert
interneuronai
2024-05-23T11:56:05Z
109
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T11:55:23Z
---
{}
---

### Advertisement Cap on Banner Classification

**Description:** Automatically classify banners and assign the appropriate advertisement cap, streamlining manufacturing and delivery processes.

## How to Use

Here is how to use this model to classify text into different categories:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "interneuronai/advertisement_cap_on_banner_classification_bert"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def classify_text(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)
    outputs = model(**inputs)
    predictions = outputs.logits.argmax(-1)
    return predictions.item()

text = "Your text here"
print("Category:", classify_text(text))
```
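For quick experiments, the same model can also be called through the `pipeline` API. This alternative is not part of the original card, and the returned label names depend on the `id2label` mapping saved with the model.

```python
from transformers import pipeline

# Hedged alternative sketch: let the pipeline handle tokenization and label mapping.
classifier = pipeline(
    "text-classification",
    model="interneuronai/advertisement_cap_on_banner_classification_bert",
)
print(classifier("Your text here"))  # e.g. [{'label': ..., 'score': ...}]
```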
Felladrin/Pythia-31M-Chat-v1
Felladrin
2024-05-23T11:54:13Z
2,016
6
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "conversational", "en", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:databricks/databricks-dolly-15k", "dataset:THUDM/webglm-qa", "dataset:starfishmedical/webGPT_x_dolly", "dataset:Amod/mental_health_counseling_conversations", "dataset:sablo/oasst2_curated", "dataset:cognitivecomputations/wizard_vicuna_70k_unfiltered", "dataset:mlabonne/chatml_dpo_pairs", "base_model:EleutherAI/pythia-31m", "base_model:finetune:EleutherAI/pythia-31m", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T12:44:07Z
--- language: - en license: apache-2.0 base_model: EleutherAI/pythia-31m datasets: - totally-not-an-llm/EverythingLM-data-V3 - databricks/databricks-dolly-15k - THUDM/webglm-qa - starfishmedical/webGPT_x_dolly - Amod/mental_health_counseling_conversations - sablo/oasst2_curated - cognitivecomputations/wizard_vicuna_70k_unfiltered - mlabonne/chatml_dpo_pairs pipeline_tag: text-generation widget: - messages: - role: system content: >- You are a career counselor. The user will provide you with an individual looking for guidance in their professional life, and your task is to assist them in determining what careers they are most suited for based on their skills, interests, and experience. You should also conduct research into the various options available, explain the job market trends in different industries, and advice on which qualifications would be beneficial for pursuing particular fields. - role: user content: Heya! - role: assistant content: Hi! How may I help you? - role: user content: >- I am interested in developing a career in software engineering. What would you recommend me to do? - messages: - role: system content: "You are a helpful assistant who answers user's questions with details and curiosity." - role: user content: What are some potential applications for quantum computing? - messages: - role: system content: You are a highly knowledgeable assistant. Help the user as much as you can. - role: user content: What are some steps I can take to become a healthier person? inference: parameters: max_new_tokens: 250 penalty_alpha: 0.5 top_k: 2 repetition_penalty: 1.0016 model-index: - name: Pythia-31M-Chat-v1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 22.7 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 25.6 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 23.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 47.99 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 0.0 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 
metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1 name: Open LLM Leaderboard --- # A Pythia Chat Model of 31M Parameters - Base model: [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) - Availability in other ML formats: - GGUF: [Felladrin/gguf-Pythia-31M-Chat-v1](https://huggingface.co/Felladrin/gguf-Pythia-31M-Chat-v1) - ONNX: [Felladrin/onnx-Pythia-31M-Chat-v1](https://huggingface.co/Felladrin/onnx-Pythia-31M-Chat-v1) ## Recommended prompt format ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {user_message}<|im_end|> <|im_start|>assistant ``` ## Recommended inference parameters ```yml penalty_alpha: 0.5 top_k: 2 repetition_penalty: 1.0016 ``` ## Datasets and parameters used for training | Dataset | License Type | |---------|--------------| | [totally-not-an-llm/EverythingLM-data-V3](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3) | mit | | [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | cc-by-sa-3.0 | | [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa) | apache-2.0 | | [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly) | cc-by-sa-3.0 | | [Amod/mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) | openrail | | [sablo/oasst2_curated](https://huggingface.co/datasets/sablo/oasst2_curated) | apache-2.0 | | [cognitivecomputations/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/cognitivecomputations/wizard_vicuna_70k_unfiltered) | apache-2.0 | | [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) | apache-2.0 | ```python SFTTrainer( model, train_dataset=train_dataset, dataset_text_field="text", eval_dataset=eval_dataset, max_seq_length=2048, packing=True, args=TrainingArguments( learning_rate=2e-6, per_device_train_batch_size=1, per_device_eval_batch_size=1, gradient_accumulation_steps=16, lr_scheduler_type="cosine", num_train_epochs=1, logging_strategy="steps", save_strategy="steps", evaluation_strategy="steps", logging_steps=10, eval_steps=10, save_steps=10, warmup_steps=50, load_best_model_at_end=True, metric_for_best_model="eval_loss", greater_is_better=False, weight_decay=0.01, save_total_limit=10, neftune_noise_alpha=5, ), callbacks=[ EarlyStoppingCallback( early_stopping_patience=3, early_stopping_threshold=0.005 ), ], ) ``` ```python DPOTrainer( model, beta=0.1, train_dataset=dataset, tokenizer=tokenizer, eval_dataset=eval_dataset, max_length=1536, max_prompt_length=1024, args=TrainingArguments( learning_rate=2e-6, per_device_train_batch_size=1, per_device_eval_batch_size=1, gradient_accumulation_steps=1, lr_scheduler_type="cosine", num_train_epochs=1, logging_strategy="steps", save_strategy="steps", evaluation_strategy="steps", logging_steps=1, eval_steps=1, save_steps=1, warmup_steps=0, load_best_model_at_end=True, metric_for_best_model="eval_loss", greater_is_better=False, weight_decay=0.0, neftune_noise_alpha=5, remove_unused_columns=False, ), callbacks=[ EarlyStoppingCallback( early_stopping_patience=3, early_stopping_threshold=0.005 ), ], ) ``` ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Pythia-31M-Chat-v1) | 
Metric |Value| |---------------------------------|----:| |Avg. |19.92| |AI2 Reasoning Challenge (25-Shot)|22.70| |HellaSwag (10-Shot) |25.60| |MMLU (5-Shot) |23.24| |TruthfulQA (0-shot) | 0.00| |Winogrande (5-shot) |47.99| |GSM8k (5-shot) | 0.00|
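Below is a minimal inference sketch that is not part of the original card: it combines the recommended ChatML prompt format and the contrastive-search style parameters listed above, reusing one of the widget prompts as the example conversation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Felladrin/Pythia-31M-Chat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the recommended ChatML-style prompt by hand.
system = "You are a helpful assistant who answers user's questions with details and curiosity."
user = "What are some potential applications for quantum computing?"
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=250,
    penalty_alpha=0.5,          # recommended inference parameters from above
    top_k=2,
    repetition_penalty=1.0016,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```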
Felladrin/gguf-Pythia-31M-Chat-v1
Felladrin
2024-05-23T11:52:25Z
24
0
null
[ "gguf", "base_model:Felladrin/Pythia-31M-Chat-v1", "base_model:quantized:Felladrin/Pythia-31M-Chat-v1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-23T10:32:23Z
--- license: apache-2.0 base_model: Felladrin/Pythia-31M-Chat-v1 --- GGUF version of [Felladrin/Pythia-31M-Chat-v1](https://huggingface.co/Felladrin/Pythia-31M-Chat-v1).
hgnoi/9zRB5wzlWxF92kvY
hgnoi
2024-05-23T11:52:01Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:50:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ruslanmv/ai-medical-model-32bit
ruslanmv
2024-05-23T11:49:41Z
2,930
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "ruslanmv", "trl", "conversational", "en", "dataset:ruslanmv/ai-medical-dataset", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T17:09:50Z
--- language: en license: apache-2.0 tags: - text-generation-inference - transformers - ruslanmv - llama - trl base_model: meta-llama/Meta-Llama-3-8B-Instruct datasets: - ruslanmv/ai-medical-dataset widget: - example_title: ai-medical-model-32bit messages: - role: system content: You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. - role: user content: What was the main cause of the inflammatory CD4+ T cells? output: text: Answer I'm happy to help! The main cause of inflammatory CD4+ T cells is a complex process that involves multiple factors. However, some of the key triggers include 1. Activation of CD4+ T cells CD4+ T cells are activated by antigens, cytokines, and other signals, leading to their proliferation and differentiation into effector cells. 2. Cytokine production Activated CD4+ T cells produce cytokines such as interleukin-2 (IL-2), interferon-gamma (IFN-γ), and tumor necrosis factor-alpha (TNF-α), which promote inflammation and immune responses. 3. Chemokine production CD4+ T cells also produce chemokines, such as CCL3, CCL4, and CCL5, which attract other immune cells to the site of inflammation. 4. Toll-like receptor (TLR) activation TLRs are pattern recognition receptors that recognize pathogen-associated molecular patterns (PAMPs) and activate CD4+ T cells. 5. Bacterial or viral infections Infections caused by bacteria, viruses, or fungi can trigger the activation of CD4+ T cells and the production of cytokines and chemokines model-index: - name: ai-medical-model-32bit results: [] --- # ai-medical-model-32bit: Fine-Tuned Llama3 for Technical Medical Questions [![](future.jpg)](https://ruslanmv.com/) This repository provides a fine-tuned version of the powerful Llama3 8B Instruct model, specifically designed to answer medical questions in an informative way. It leverages the rich knowledge contained in the AI Medical Dataset ([ruslanmv/ai-medical-dataset](https://huggingface.co/datasets/ruslanmv/ai-medical-dataset)). **Model & Development** - **Developed by:** ruslanmv - **License:** Apache-2.0 - **Finetuned from model:** meta-llama/Meta-Llama-3-8B-Instruct **Key Features** - **Medical Focus:** Optimized to address health-related inquiries. - **Knowledge Base:** Trained on a comprehensive medical dataset. - **Text Generation:** Generates informative and potentially helpful responses. **Installation** This model is accessible through the Hugging Face Transformers library. 
Install it using pip: ```bash !python -m pip install --upgrade pip !pip3 install torch==2.2.1 torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121 !pip install bitsandbytes accelerate ``` **Usage Example** Here's a Python code snippet demonstrating how to interact with the `ai-medical-model-32bit` model and generate answers to your medical questions: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig import torch model_name = "ruslanmv/ai-medical-model-32bit" device_map = 'auto' bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, ) model = AutoModelForCausalLM.from_pretrained( model_name, quantization_config=bnb_config, trust_remote_code=True, use_cache=False, device_map=device_map ) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) tokenizer.pad_token = tokenizer.eos_token def askme(question): prompt = f"<|start_header_id|>system<|end_header_id|> You are a Medical AI chatbot assistant. <|eot_id|><|start_header_id|>User: <|end_header_id|>This is the question: {question}<|eot_id|>" # Tokenizing the input and generating the output #prompt = f"{question}" # Tokenizing the input and generating the output inputs = tokenizer([prompt], return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=256, use_cache=True) answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0] # Try Remove the prompt try: # Split the answer at the first line break, assuming system intro and question are on separate lines answer_parts = answer.split("\n", 1) # If there are multiple parts, consider the second part as the answer if len(answer_parts) > 1: answers = answer_parts[1].strip() # Remove leading/trailing whitespaces else: answers = "" # If no split possible, set answer to empty string print(f"Answer: {answers}") except: print(answer) # Example usage # - Question: Make the question. question="What was the main cause of the inflammatory CD4+ T cells?" askme(question) ``` the type of answer is : ``` Answer: I'm happy to help! The main cause of inflammatory CD4+ T cells is a complex process that involves multiple factors. However, some of the key triggers include: 1. Activation of CD4+ T cells: CD4+ T cells are activated by antigens, cytokines, and other signals, leading to their proliferation and differentiation into effector cells. 2. Cytokine production: Activated CD4+ T cells produce cytokines such as interleukin-2 (IL-2), interferon-gamma (IFN-γ), and tumor necrosis factor-alpha (TNF-α), which promote inflammation and immune responses. 3. Chemokine production: CD4+ T cells also produce chemokines, such as CCL3, CCL4, and CCL5, which attract other immune cells to the site of inflammation. 4. Toll-like receptor (TLR) activation: TLRs are pattern recognition receptors that recognize pathogen-associated molecular patterns (PAMPs) and activate CD4+ T cells. 5. Bacterial or viral infections: Infections caused by bacteria, viruses, or fungi can trigger the activation of CD4+ T cells and the production of cytokines and chemokines ``` **Important Note** This model is intended for informational purposes only and should not be used as a substitute for professional medical advice. Always consult with a qualified healthcare provider for any medical concerns. **License** This model is distributed under the Apache License 2.0 (see LICENSE file for details). **Contributing** We welcome contributions to this repository! 
If you have improvements or suggestions, feel free to create a pull request. **Disclaimer** While we strive to provide informative responses, the accuracy of the model's outputs cannot be guaranteed. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ruslanmv__ai-medical-model-32bit) | Metric |Value| |---------------------------------|----:| |Avg. |67.67| |AI2 Reasoning Challenge (25-Shot)|61.43| |HellaSwag (10-Shot) |78.69| |MMLU (5-Shot) |68.10| |TruthfulQA (0-shot) |51.99| |Winogrande (5-shot) |75.77| |GSM8k (5-shot) |70.05|
nitus-ac/nMer3c2
nitus-ac
2024-05-23T11:48:28Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Undi95/Llama-3-LewdPlay-8B", "base_model:merge:Undi95/Llama-3-LewdPlay-8B", "base_model:ajibawa-2023/General-Stories-Mistral-7B", "base_model:merge:ajibawa-2023/General-Stories-Mistral-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:27:39Z
--- base_model: - ajibawa-2023/General-Stories-Mistral-7B - Undi95/Llama-3-LewdPlay-8B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [ajibawa-2023/General-Stories-Mistral-7B](https://huggingface.co/ajibawa-2023/General-Stories-Mistral-7B) * [Undi95/Llama-3-LewdPlay-8B](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: ajibawa-2023/General-Stories-Mistral-7B layer_range: [0, 32] - model: Undi95/Llama-3-LewdPlay-8B layer_range: [0, 32] merge_method: slerp base_model: Undi95/Llama-3-LewdPlay-8B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
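The card only documents the merge configuration. As a sketch that is not from the original card, the snippet below simply assumes the merged checkpoint loads as a standard `transformers` causal LM; the prompt is made up.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "nitus-ac/nMer3c2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Write a two-sentence story about a lighthouse keeper."  # made-up example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```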
lucianosb/boto-7B-v1.2-GGUF
lucianosb
2024-05-23T11:48:13Z
106
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "pt", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-23T04:03:51Z
--- language: - pt license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_Model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit --- # Boto 7B 1.2 - GGUF - Criador do Modelo: [Luciano Santa Brígida](https://lucianosb.com.br/) - Modelo Original: [Boto-7B v1.2](https://huggingface.co/lucianosb/boto-7B-v1.2) Boto-7B é um modelo de linguagem de 7 bilhões de parâmetros, otimizado a partir do Mistral-7B-intruct-v0.3. Confira os [presets](https://huggingface.co/lucianosb/boto-7B-GGUF/tree/main/presets) para usar com [LM Studio](https://lmstudio.ai/). ## Arquivos Incluídos | Nome | Método Quant | Bits | Tamanho | Desc | | ---- | ---- | ---- | ---- | ----- | | [boto-7B-v1.2-GGUF-unsloth.Q2_K.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q2_K.gguf) | q2_K | 2 | 2.72 GB | Quantização em 2-bit. Significativa perda de qualidade. Não-recomendado. | | [boto-7B-v1.2-GGUF-unsloth.Q3_K_M.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q3_K_M.gguf) | q3_K_M| 3 | 3.52 GB | Quantização em 3-bit. | | [boto-7B-v1.2-GGUF-unsloth.Q3_K_S.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q3_K_S.gguf) | q3_K_S | 3 | 3.17 GB | Quantização em 3-bit. | | [boto-7B-v1.2-GGUF-unsloth.Q4_0.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_0.gguf) | q4_0 | 4 | 4.11 GB | Quantização em 4-bit. Prefira usar o Q3_K_M| | [boto-7B-v1.2-GGUF-unsloth.Q4_K_S.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_K_S.gguf) | q4_K_S | 4 | 4.14 GB | Quantização em 4-bit. | | [boto-7B-v1.2-GGUF-unsloth.Q3_K_L.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q3_K_L.gguf) | q3_K_L | 3 | 3.83 GB | Quantização em 3-bit com menor perda de qualidade. | | [boto-7B-v1.2-GGUF-unsloth.Q4_K_M.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_K_M.gguf) | q4_K_M | 4 | 4.37 GB | Quantização em 4-bit. | | [boto-7B-v1.2-GGUF-unsloth.Q4_1.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_1.gguf) | q4_1 | 4 | 4.56 GB | Quantização em 4-bit. Acurácia maior que q4_0 mas não tão boa quanto q5_0. Inferência mais rápida que os modelos q5. | | [boto-7B-v1.2-GGUF-unsloth.Q5_0.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_0.gguf) | q5_0 | 5 | 5 GB | Quantização em 5-bit. Melhor acurácia, maior uso de recursos, inferência mais lenta. | | [boto-7B-v1.2-GGUF-unsloth.Q5_1.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_1.gguf) | q5_1 | 5 | 5.45 GB | Quantização em 5-bit. Ainda Melhor acurácia, maior uso de recursos, inferência mais lenta. | | [boto-7B-v1.2-GGUF-unsloth.Q5_K_M.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_K_M.gguf) | q5_K_M | 5 | 5.14 GB | Quantização em 5-bit. Melhor performance. Recomendado. | | [boto-7B-v1.2-GGUF-unsloth.Q5_K_S.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_K_S.gguf) | q5_K_S | 5 | 5 GB | Quantização em 5-bit. | | [boto-7B-v1.2-GGUF-unsloth.Q6_K.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q6_K.gguf) | q6_K | 6 | 5.95 GB | Quantização em 6-bit. 
| | [boto-7B-v1.2-GGUF-unsloth.Q8_0.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q8_0.gguf) | q8_0 | 8 | 7.7 GB | Quantização em 8-bit. Quase indistinguível do float16. Usa muitos recursos e é mais lento. | **Observação**: os valores de RAM acima não pressupõem descarregamento de GPU. Se as camadas forem descarregadas para a GPU, isso reduzirá o uso de RAM e usará VRAM. ## Template ```` ### Instrução: {prompt} ### Resposta: ```` # Uploaded model - **Developed by:** lucianosb - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
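As a usage illustration that is not part of the original card, the sketch below runs the recommended Q5_K_M file with `llama-cpp-python` and the prompt template shown above; the context size, sampling settings, and example instruction are assumptions.

```python
from llama_cpp import Llama

# Hedged sketch: download boto-7B-v1.2-GGUF-unsloth.Q5_K_M.gguf from the repo first.
llm = Llama(model_path="boto-7B-v1.2-GGUF-unsloth.Q5_K_M.gguf", n_ctx=4096)

prompt = "### Instrução:\nExplique em uma frase o que é um modelo de linguagem.\n\n### Resposta:\n"
output = llm(prompt, max_tokens=200, temperature=0.7, stop=["### Instrução:"])
print(output["choices"][0]["text"])
```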
grimjim/cuckoo-starling-32k-7B
grimjim
2024-05-23T11:47:42Z
11
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2310.05209", "base_model:grimjim/Mistral-Starling-merge-trial1-7B", "base_model:merge:grimjim/Mistral-Starling-merge-trial1-7B", "base_model:grimjim/kukulemon-7B", "base_model:merge:grimjim/kukulemon-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-16T01:30:41Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - grimjim/Mistral-Starling-merge-trial1-7B - grimjim/kukulemon-7B pipeline_tag: text-generation model-index: - name: cuckoo-starling-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.81 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/cuckoo-starling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.97 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/cuckoo-starling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.88 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/cuckoo-starling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 59.03 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/cuckoo-starling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/cuckoo-starling-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 62.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=grimjim/cuckoo-starling-7B name: Open LLM Leaderboard --- # cuckoo-starling-32k-7B For this merged model, rope theta was in config.json was manually adjusted down to 100K, a value less than 1M as initially released by Mistral for v0.2, but higher than the 10K that accompanied practical 8K context for v0.1. We idly conjecture that 1M rope theta might improve performance for needle-in-a-haystack queries; however, during informal testing, narrative coherence seemed to occasionally suffer under 1M rope theta. Furthermore, the results reported in the arXiv paper [Scaling Laws of RoPE-based Extrapolation](https://arxiv.org/abs/2310.05209) suggest that 1M rope theta may be overkill for a 32K token context window. Lightly tested with temperature 0.9-1.0 and minP 0.02, using ChatML prompts. The model natively supports Alpaca prompts. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). - Full weights: [grimjim/cuckoo-starling-32k-7B](https://huggingface.co/grimjim/cuckoo-starling-32k-7B/) - GGUFs: [grimjim/cuckoo-starling-32k-7B-GGUF](https://huggingface.co/grimjim/cuckoo-starling-32k-7B-GGUF/) ## Merge Details ### Merge Method This model was merged using the SLERP merge method. 
### Models Merged The following models were included in the merge: * [grimjim/Mistral-Starling-merge-trial1-7B](https://huggingface.co/grimjim/Mistral-Starling-merge-trial1-7B) * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: grimjim/Mistral-Starling-merge-trial1-7B layer_range: [0, 32] - model: grimjim/kukulemon-7B layer_range: [0, 32] # or, the equivalent models: syntax: # models: merge_method: slerp base_model: grimjim/Mistral-Starling-merge-trial1-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_grimjim__cuckoo-starling-7B) | Metric |Value| |---------------------------------|----:| |Avg. |69.93| |AI2 Reasoning Challenge (25-Shot)|66.81| |HellaSwag (10-Shot) |85.97| |MMLU (5-Shot) |64.88| |TruthfulQA (0-shot) |59.03| |Winogrande (5-shot) |80.11| |GSM8k (5-shot) |62.77|
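As a usage illustration that is not part of the original card, the sketch below applies the ChatML prompt style and the temperature/min-p settings mentioned above; `min_p` requires a recent `transformers` release, and the example question is made up.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "grimjim/cuckoo-starling-32k-7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize the plot of a short mystery story.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.95, min_p=0.02)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```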
omniamnaeem/semantic_sim_ner_with_ContrastiveLossV5
omniamnaeem
2024-05-23T11:46:56Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-23T11:46:46Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 249872 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "__main__.LossEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.001 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 150, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
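Since the model targets sentence similarity, a natural follow-up (not part of the original card) is to score sentence pairs with cosine similarity; the sketch below assumes the repository loads as a standard Sentence-Transformers model.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("omniamnaeem/semantic_sim_ner_with_ContrastiveLossV5")
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
score = util.cos_sim(embeddings[0], embeddings[1])  # cosine similarity in [-1, 1]
print(float(score))
```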
toiquangle1234/comment_classification
toiquangle1234
2024-05-23T11:42:58Z
116
1
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:vinai/phobert-base-v2", "base_model:finetune:vinai/phobert-base-v2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-17T15:46:28Z
--- base_model: vinai/phobert-base-v2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: comment_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # comment_classification This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4387 - Accuracy: 0.9228 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2831 | 1.0 | 18435 | 0.2946 | 0.9082 | | 0.2315 | 2.0 | 36870 | 0.3132 | 0.9182 | | 0.1888 | 3.0 | 55305 | 0.3513 | 0.9235 | | 0.1483 | 4.0 | 73740 | 0.3971 | 0.9209 | | 0.0997 | 5.0 | 92175 | 0.4387 | 0.9228 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Tokenizers 0.19.1
jun-han/my_transformer_model
jun-han
2024-05-23T11:36:32Z
46
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2024-05-23T11:36:25Z
# My Transformer Model

This is a custom Transformer model trained for [your task].

## Model Details

- **Input Dimension:** 10000
- **Model Dimension:** 512
- **Number of Heads:** 8
- **Number of Layers:** 6
- **Output Dimension:** 2

## Usage

```python
from transformers import AutoModel
import torch

model = AutoModel.from_pretrained("jun-han/my_transformer_model")
```
wendlerc/fakepedia-one-hop-10steps
wendlerc
2024-05-23T11:36:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-15T10:40:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
harshilg-77/autoTrainMcKessonColab1
harshilg-77
2024-05-23T11:28:50Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:10:15Z
--- tags: - autotrain - text-generation-inference - text-generation - peft library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
Mixry/RL-Investigation
Mixry
2024-05-23T11:25:01Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-05-23T11:24:53Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Mixry/RL-Investigation
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
fhswf/BPE_GPT2_TinyStoriesV2_cleaned_2048
fhswf
2024-05-23T11:25:00Z
0
1
null
[ "text generation", "en", "dataset:fhswf/TinyStoriesV2_cleaned", "license:mit", "region:us" ]
null
2024-05-22T12:16:53Z
---
license: mit
language:
- en
tags:
- text generation
datasets:
- fhswf/TinyStoriesV2_cleaned
---

# BPE Tokenizer for TinyStoriesV2

Based on the GPT-Neo BPE tokenizer, but with a smaller vocabulary. Trained on TinyStoriesV2.

- Vocab size: 2048
- 256 base characters
- 1 extra token: `<|endoftext|>`
- 1791 merges
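A minimal loading sketch (not part of the original card), assuming the tokenizer files load through the standard `AutoTokenizer` API:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fhswf/BPE_GPT2_TinyStoriesV2_cleaned_2048")

text = "Once upon a time there was a little fox."
ids = tokenizer(text)["input_ids"]
print(len(ids))                              # number of tokens under the 2048-entry vocab
print(tokenizer.convert_ids_to_tokens(ids))  # inspect the BPE pieces
```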
StefanMGreen/GenATCLlama.V2
StefanMGreen
2024-05-23T11:20:55Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T11:20:48Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** StefanMGreen - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
giantdev/qrBSJUkHkValOHym2
giantdev
2024-05-23T11:20:00Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:18:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BeaverAI/Cream-Phi-3-14B-v1c
BeaverAI
2024-05-23T11:18:39Z
0
1
null
[ "region:us" ]
null
2024-05-23T07:11:35Z
tldr; This is Phi 3 Medium finetuned for roleplaying. We needed more explicit moist. It failed.

Training Details:
- 8x H100 80GB SXM GPUs
- 10 minutes training duration
- A continued finetune of Cream-Phi-3-14B-v1b (now released as the official v1)

Results for Roleplay Mode (i.e., not Instruct format):
- Workable RP formatting with occasional mistakes. (Yep, it got worse)
- Long-ish and moist response. It cooks fast.
- Slightly incoherent. Can go hard on moist scenes but with poor spatial and anatomical understanding.
- Important: My testing is lazy and flawed. Take it with a grain of salt and test the GGUFs before taking notes.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/BjN92w1x9XbsOj0RpALiz.png)

(No eval split = no eval metrics ^)

Axolotl Config (some fields omitted)

```yaml
base_model: BeaverAI/Cream-Phi-3-14B-v1b
load_in_4bit: true
bf16: auto
fp16:
tf32: false
flash_attention: true
sequence_len: 6144

datasets:
  - path: SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
    type: customphi3

num_epochs: 2
warmup_steps: 5
weight_decay: 0.1

adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true

gradient_accumulation_steps: 2
micro_batch_size: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
sample_packing: true
pad_to_sequence_len: true

optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0001
max_grad_norm: 1.0
```
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-875153
fine-tuned
2024-05-23T11:17:16Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-875153", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-23T11:16:46Z
---
license: apache-2.0
datasets:
- fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-875153
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---

This model is a fine-tuned version of [**BAAI/bge-large-en**](https://huggingface.co/BAAI/bge-large-en) designed for the following use case:

custom

## How to Use

This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-875153',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
Heimat24/vhs_burghausen_danielheinz_e5_v2-qa_generation_secretary-5-2-0.8
Heimat24
2024-05-23T11:16:02Z
8
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-23T11:15:26Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 81 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 16, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
cosuleabianca/bert_ner
cosuleabianca
2024-05-23T11:14:38Z
119
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T11:08:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF
mradermacher
2024-05-23T11:13:40Z
33
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "en", "dataset:Trelis/gawiki", "base_model:Trelis/Meta-Llama-3-8B-Instruct-Gaeilge", "base_model:quantized:Trelis/Meta-Llama-3-8B-Instruct-Gaeilge", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-23T10:46:06Z
---
base_model: Trelis/Meta-Llama-3-8B-Instruct-Gaeilge
datasets:
- Trelis/gawiki
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Trelis/Meta-Llama-3-8B-Instruct-Gaeilge

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct-Gaeilge-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Gaeilge.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
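Example of running one of these quants locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (untested sketch; the Q4_K_M filename comes from the table above, and the Irish prompt is only an illustration):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has already been downloaded next to this script.
llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct-Gaeilge.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Abair haigh liom as Gaeilge."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```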
Fyodor11/Fyodor
Fyodor11
2024-05-23T11:07:28Z
0
0
null
[ "license:other", "region:us" ]
null
2024-05-22T18:05:37Z
---
license: other
license_name: .
license_link: LICENSE
---
hgnoi/YHDA3tMrfRhfX0Ks
hgnoi
2024-05-23T11:06:42Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:05:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ryan0712/llama-3-8b-slow-DUS-layer-SLERP
ryan0712
2024-05-23T11:06:36Z
145
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Meta-Llama-3-8B", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:finetune:NousResearch/Meta-Llama-3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T11:05:52Z
---
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Meta-Llama-3-8B
base_model:
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B
---

# llama-3-8b-slow-DUS-layer-SLERP

llama-3-8b-slow-DUS-layer-SLERP is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: NousResearch/Meta-Llama-3-8B
        layer_range: [5, 6]
      - model: NousResearch/Meta-Llama-3-8B
        layer_range: [20, 21]
merge_method: slerp
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ryan0712/llama-3-8b-slow-DUS-layer-SLERP"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
lgk03/NDD-addressbook_test-tags
lgk03
2024-05-23T11:06:30Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-05T07:30:49Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: NDD-addressbook_test-tags
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# NDD-addressbook_test-tags

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6587
- Accuracy: 0.7247
- F1: 0.6200
- Precision: 0.7021
- Recall: 0.7247

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1633        | 1.0   | 694  | 0.5362          | 0.7247   | 0.6200 | 0.7021    | 0.7247 |
| 0.1493        | 2.0   | 1388 | 0.6587          | 0.7247   | 0.6200 | 0.7021    | 0.7247 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
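Example inference (untested sketch; the label names come from the repo's `config.json` and are not documented in this card):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="lgk03/NDD-addressbook_test-tags")

# Replace with a real input from the task's domain; the output labels depend on the repo's config.
print(clf("example input text"))
# -> [{'label': '...', 'score': ...}]
```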
MaadTechnologies/MoonLander
MaadTechnologies
2024-05-23T11:05:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-23T10:56:40Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 250.00 +/- 45.57
      name: mean_reward
      verified: false
---

# **MlpPolicy** Agent playing **LunarLander-v2**

This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
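A minimal loading sketch (not from the original author; it assumes the agent was trained with PPO and that the checkpoint in this repo is named `MoonLander.zip` — adjust both to match the actual files):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumptions: PPO was the training algorithm and the checkpoint file is "MoonLander.zip".
checkpoint = load_from_hub(repo_id="MaadTechnologies/MoonLander", filename="MoonLander.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy on LunarLander-v2.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```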
thomasabebe/mistralCrazy
thomasabebe
2024-05-23T10:59:22Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-23T10:30:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lyhourt/whisper-clean_4
lyhourt
2024-05-23T10:58:43Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:lyhourt/clean_4", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T08:54:19Z
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- lyhourt/clean_4
metrics:
- wer
model-index:
- name: whisper-small-clean_4
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: lyhourt/clean_4
      type: lyhourt/clean_4
    metrics:
    - name: Wer
      type: wer
      value: 11.374407582938389
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-small-clean_4

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the lyhourt/clean_4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0801
- Wer: 11.3744

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 300
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0466        | 0.3333 | 100  | 0.1170          | 15.9953 |
| 0.2315        | 1.0333 | 200  | 0.0852          | 12.6777 |
| 0.0031        | 1.3667 | 300  | 0.0801          | 11.3744 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
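Example transcription (untested sketch; `audio.wav` is a placeholder for your own recording):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lyhourt/whisper-clean_4",
    chunk_length_s=30,  # Whisper works on 30-second windows; chunking handles longer audio
)

print(asr("audio.wav")["text"])
```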
kisejin/absa_step2
kisejin
2024-05-23T10:58:38Z
111
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-20T10:15:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Boadiwaa/LORA-colab-Distil-Whisper-medium
Boadiwaa
2024-05-23T10:56:07Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T10:56:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuantFactory/Mistral-7B-v0.3-GGUF
QuantFactory
2024-05-23T10:55:08Z
260
1
transformers
[ "transformers", "gguf", "mistral", "text-generation", "base_model:mistralai/Mistral-7B-v0.3", "base_model:quantized:mistralai/Mistral-7B-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T06:32:19Z
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
base_model: mistralai/Mistral-7B-v0.3
---

# Mistral-7B-v0.3-GGUF
- This is a quantized version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) created using llama.cpp

# Model Description

The Mistral-7B-v0.3 Large Language Model (LLM) is a Mistral-7B-v0.2 with extended vocabulary.

Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2/edit/main/README.md):
- Extended vocabulary to 32768

## Installation

It is recommended to use `mistralai/Mistral-7B-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.

```
pip install mistral_inference
```

### Demo

After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.

```
mistral-demo $HOME/mistral_models/7B-v0.3
```

Should give something along the following lines:

```
This is a test of the emergency broadcast system. This is only a test. If this were a real emergency, you would be told what to do. This is a test
=====================
This is another test of the new blogging software. I’m not sure if I’m going to keep it or not. I’m not sure if I’m going to keep
=====================
This is a third test, mistral AI is very good at testing. 🙂 This is a third test, mistral AI is very good at testing. 🙂 This
=====================
```

## Generate with `transformers`

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Hello my name is", return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms.

We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
quangtqv/cross_encoder_tool_learning_beta_23_5
quangtqv
2024-05-23T10:49:11Z
107
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T10:47:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hamaadrafique/indoor_localization_classifier
hamaadrafique
2024-05-23T10:47:27Z
64
1
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-23T10:09:08Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_keras_callback model-index: - name: hamaadrafique/indoor_localization_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # hamaadrafique/indoor_localization_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.6696 - Validation Loss: 3.7110 - Train Accuracy: 0.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 3.6990 | 3.6910 | 0.0 | 0 | | 3.6738 | 3.6922 | 0.0 | 1 | | 3.6766 | 3.7100 | 0.0 | 2 | | 3.6836 | 3.7129 | 0.0 | 3 | | 3.6696 | 3.7110 | 0.0 | 4 | ### Framework versions - Transformers 4.41.0 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
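The optimizer block listed above can be reconstructed as a short Keras sketch (assuming the standard `AdamWeightDecay` optimizer shipped with 🤗 Transformers; the data pipeline and classification head are not documented, so only the optimizer/schedule is shown):

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Learning-rate schedule from the hyperparameters above:
# 3e-05 decayed linearly (power=1.0) to 0.0 over 950 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=950,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# AdamWeightDecay with the listed betas, epsilon and weight-decay rate.
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```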
tanganke/flan-t5-large_glue-rte_lora-16
tanganke
2024-05-23T10:46:56Z
27
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:google/flan-t5-large", "base_model:adapter:google/flan-t5-large", "region:us" ]
null
2024-05-23T10:46:45Z
--- library_name: peft base_model: google/flan-t5-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
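Because the usage section above is left blank, the following is a minimal, unverified sketch of loading this LoRA adapter on the stated base model (`google/flan-t5-large`) with PEFT; the RTE-style prompt is a hypothetical illustration, not necessarily the template used for fine-tuning:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "google/flan-t5-large"                       # base model from the card metadata
adapter_id = "tanganke/flan-t5-large_glue-rte_lora-16"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)    # attach the LoRA weights

# Hypothetical GLUE-RTE style prompt; the exact template is not documented.
prompt = "rte sentence1: A man is playing a guitar. sentence2: A person is playing music."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```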
hgnoi/CH3djeKtx1gjWCbQ
hgnoi
2024-05-23T10:44:09Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T10:42:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hjskhan/llama-3-math-finetuned-100
hjskhan
2024-05-23T10:38:35Z
83
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "math", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T07:53:42Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - math base_model: unsloth/llama-3-8b-bnb-4bit pipeline_tag: text-generation --- # Uploaded model - **Developed by:** hjskhan - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit - **No. of iterations :** 500 - **Training loss:** 0.469800 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
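A minimal inference sketch (assuming the safetensors weights in this repo load directly with 🤗 Transformers; the math prompt and generation settings are illustrative only):

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint for text generation.
generator = pipeline(
    "text-generation",
    model="hjskhan/llama-3-math-finetuned-100",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Solve step by step: If 3x + 5 = 20, what is x?"
print(generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])
```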
chlee10/T3Q-Llama3-8B-dpo-v2.0
chlee10
2024-05-23T10:38:29Z
90
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-30T11:10:01Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Evaluation hf-causal-experimental (pretrained=chlee10/T3Q-Llama3-8B-dpo-v2.0,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8 | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.5150|± |0.0133| | | |macro_f1|0.3669|± |0.0090| |kobest_copa | 0|acc |0.6420|± |0.0152| | | |macro_f1|0.6417|± |0.0151| |kobest_hellaswag| 0|acc |0.4480|± |0.0223| | | |acc_norm|0.5720|± |0.0221| | | |macro_f1|0.4455|± |0.0223| |kobest_sentineg | 0|acc |0.6222|± |0.0244| | | |macro_f1|0.5820|± |0.0256|
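A minimal generation sketch for this checkpoint (assuming it loads as a standard Llama-3 causal LM with 🤗 Transformers; the Korean prompt is illustrative, chosen to match the kobest evaluation above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chlee10/T3Q-Llama3-8B-dpo-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative Korean prompt ("Where is the capital of South Korea?").
inputs = tokenizer("대한민국의 수도는 어디인가요?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```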
Likich/gemma-finetune-qualcoding_1000_prompt2
Likich
2024-05-23T10:37:45Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T10:37:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tjasad/lora_fine_tuned_boolq_sloberta
tjasad
2024-05-23T10:35:50Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/sloberta", "base_model:adapter:EMBEDDIA/sloberta", "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-23T10:35:48Z
--- license: cc-by-sa-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/sloberta metrics: - accuracy - f1 model-index: - name: lora_fine_tuned_boolq_sloberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_fine_tuned_boolq_sloberta This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5648 - Accuracy: 0.7778 - F1: 0.6806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.6767 | 4.1667 | 50 | 0.5823 | 0.7778 | 0.6806 | | 0.6548 | 8.3333 | 100 | 0.5676 | 0.7778 | 0.6806 | | 0.654 | 12.5 | 150 | 0.5688 | 0.7778 | 0.6806 | | 0.6545 | 16.6667 | 200 | 0.5718 | 0.7778 | 0.6806 | | 0.6521 | 20.8333 | 250 | 0.5688 | 0.7778 | 0.6806 | | 0.6507 | 25.0 | 300 | 0.5654 | 0.7778 | 0.6806 | | 0.6494 | 29.1667 | 350 | 0.5658 | 0.7778 | 0.6806 | | 0.646 | 33.3333 | 400 | 0.5648 | 0.7778 | 0.6806 | ### Framework versions - PEFT 0.11.1 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
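A minimal, unverified sketch of loading the adapter for inference; `num_labels=2` and the Slovene passage/question pair are assumptions (BoolQ-style tasks are binary yes/no classification):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "EMBEDDIA/sloberta"
adapter_id = "tjasad/lora_fine_tuned_boolq_sloberta"

# num_labels=2 is an assumption based on the binary BoolQ task.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hypothetical Slovene passage/question pair.
inputs = tokenizer(
    "France Prešeren je bil slovenski pesnik.",
    "Ali je bil Prešeren pesnik?",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))
```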
thliang01/cccorgy_dog_LoRA
thliang01
2024-05-23T10:34:36Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-23T10:28:05Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of TOK dog widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - thliang01/cccorgy_dog_LoRA <Gallery /> ## Model description These are thliang01/cccorgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of TOK dog to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/thliang01/cccorgy_dog_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
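One possible way to fill in the "How to use" placeholder above, assuming the standard SDXL + LoRA loading path in 🤗 Diffusers (the fp16-fix VAE mirrors the training setup described in the model description; the prompt uses the documented trigger phrase):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# VAE matching the one used for training, per the model description.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("thliang01/cccorgy_dog_LoRA")

# Use the trigger phrase documented above.
image = pipe("a photo of TOK dog in a park", num_inference_steps=25).images[0]
image.save("corgi.png")
```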
dzungPaduahsgs/Mistral7B_xyz
dzungPaduahsgs
2024-05-23T10:29:58Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Viet-Mistral/Vistral-7B-Chat", "base_model:adapter:Viet-Mistral/Vistral-7B-Chat", "region:us" ]
null
2024-05-23T10:29:35Z
--- library_name: peft base_model: Viet-Mistral/Vistral-7B-Chat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Muhammad2003/TriMistral-7B-TIES
Muhammad2003
2024-05-23T10:29:15Z
186
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Hermes-2-Pro-Mistral-7B", "instructlab/merlinite-7b-lab", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:instructlab/merlinite-7b-lab", "base_model:merge:instructlab/merlinite-7b-lab", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T12:24:43Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - NousResearch/Hermes-2-Pro-Mistral-7B - instructlab/merlinite-7b-lab base_model: - NousResearch/Hermes-2-Pro-Mistral-7B - instructlab/merlinite-7b-lab model-index: - name: TriMistral-7B-TIES results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 64.85 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.8 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.47 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-TIES name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 60.88 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-TIES name: Open LLM Leaderboard --- # TriMistral-7B-TIES TriMistral-7B-TIES is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) * [instructlab/merlinite-7b-lab](https://huggingface.co/instructlab/merlinite-7b-lab) Special thanks to Charles Goddard for the quick implementation! 
## 🧩 Configuration ```yaml models: - model: HuggingFaceH4/zephyr-7b-beta # no parameters necessary for base model - model: NousResearch/Hermes-2-Pro-Mistral-7B parameters: density: 0.5 weight: 0.5 - model: instructlab/merlinite-7b-lab parameters: density: 0.5 weight: 0.3 merge_method: ties base_model: HuggingFaceH4/zephyr-7b-beta parameters: normalize: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Muhammad2003/TriMistral-7B-TIES" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` ## 🏆 Evaluation # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Muhammad2003__TriMistral-7B-TIES) | Metric |Value| |---------------------------------|----:| |Avg. |67.68| |AI2 Reasoning Challenge (25-Shot)|64.85| |HellaSwag (10-Shot) |83.80| |MMLU (5-Shot) |63.45| |TruthfulQA (0-shot) |56.47| |Winogrande (5-shot) |76.64| |GSM8k (5-shot) |60.88|
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-814821
fine-tuned
2024-05-23T10:25:57Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-814821", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-23T10:25:45Z
--- license: apache-2.0 datasets: - fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-814821 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-base-zh-v1.5**](https://huggingface.co/BAAI/bge-base-zh-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-814821', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
922CA/Silicon-Natsuki-7b
922CA
2024-05-23T10:24:29Z
17
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:SanjiWatsuki/Silicon-Maid-7B", "base_model:finetune:SanjiWatsuki/Silicon-Maid-7B", "license:llama3", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-15T07:59:41Z
--- language: - en license: llama3 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: SanjiWatsuki/Silicon-Maid-7B model-index: - name: Silicon-Natsuki-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.19 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=922CA/Silicon-Natsuki-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.98 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=922CA/Silicon-Natsuki-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.88 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=922CA/Silicon-Natsuki-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 57.85 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=922CA/Silicon-Natsuki-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.69 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=922CA/Silicon-Natsuki-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 57.62 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=922CA/Silicon-Natsuki-7b name: Open LLM Leaderboard --- # Silicon-Natsuki-7b * Model fine-tuned for Natsuki character from DDLC per a request * Base: [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) (Mistral) * [GGUF](https://huggingface.co/922CA/Silicon-Natsuki-7b-gguf) ### USAGE For best results: replace "Human" and "Assistant" with "Player" and "Natsuki" like so: \nPlayer: (prompt)\nNatsuki: ### HYPERPARAMS * Trained for 1 epoch * rank: 32 * lora alpha: 32 * lora dropout: 0 * lr: 2e-4 * batch size: 2 * grad steps: 4 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ### WARNINGS AND DISCLAIMERS This model is meant to closely reflect the characteristics of Natsuki. Despite this, there is always the chance that "Natsuki" will hallucinate and get information about herself wrong or act out of character. Finally, this model is not guaranteed to output aligned or safe outputs, use at your own risk. 
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_922CA__Silicon-Natsuki-7b) | Metric |Value| |---------------------------------|----:| |Avg. |67.70| |AI2 Reasoning Challenge (25-Shot)|65.19| |HellaSwag (10-Shot) |83.98| |MMLU (5-Shot) |62.88| |TruthfulQA (0-shot) |57.85| |Winogrande (5-shot) |78.69| |GSM8k (5-shot) |57.62|
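A generation sketch following the Player/Natsuki prompt format from the USAGE section above (the sampling settings are illustrative and not taken from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "922CA/Silicon-Natsuki-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt format from the card: "\nPlayer: (prompt)\nNatsuki:"
prompt = "\nPlayer: Hey Natsuki, did you bake cupcakes today?\nNatsuki:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```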
tjasad/prompt_fine_tuned_boolq_sloberta
tjasad
2024-05-23T10:24:16Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:EMBEDDIA/sloberta", "base_model:adapter:EMBEDDIA/sloberta", "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-23T10:24:11Z
--- license: cc-by-sa-4.0 library_name: peft tags: - generated_from_trainer base_model: EMBEDDIA/sloberta metrics: - accuracy - f1 model-index: - name: prompt_fine_tuned_boolq_sloberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prompt_fine_tuned_boolq_sloberta This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5641 - Accuracy: 0.7778 - F1: 0.6806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:| | 0.6616 | 4.1667 | 50 | 0.5869 | 0.7778 | 0.6806 | | 0.6427 | 8.3333 | 100 | 0.5746 | 0.7778 | 0.6806 | | 0.6365 | 12.5 | 150 | 0.5704 | 0.7778 | 0.6806 | | 0.6407 | 16.6667 | 200 | 0.5675 | 0.7778 | 0.6806 | | 0.6352 | 20.8333 | 250 | 0.5658 | 0.7778 | 0.6806 | | 0.6386 | 25.0 | 300 | 0.5641 | 0.7778 | 0.6806 | | 0.6445 | 29.1667 | 350 | 0.5643 | 0.7778 | 0.6806 | | 0.6363 | 33.3333 | 400 | 0.5641 | 0.7778 | 0.6806 | ### Framework versions - PEFT 0.11.1 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
cm309/classification_bert_da_superior
cm309
2024-05-23T10:23:29Z
120
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:cm309/distilbert-base-uncased-finetuned-Financial-News-Superior", "base_model:finetune:cm309/distilbert-base-uncased-finetuned-Financial-News-Superior", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T10:23:20Z
--- license: apache-2.0 base_model: cm309/distilbert-base-uncased-finetuned-Financial-News-Superior tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: classification_bert_da_superior results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # classification_bert_da_superior This model is a fine-tuned version of [cm309/distilbert-base-uncased-finetuned-Financial-News-Superior](https://huggingface.co/cm309/distilbert-base-uncased-finetuned-Financial-News-Superior) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8576 - Precision: 0.8576 - Recall: 0.8576 - F1: 0.8568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.5961 | 1.0 | 597 | 0.4288 | 0.8300 | 0.8336 | 0.8300 | 0.8312 | | 0.3723 | 2.0 | 1194 | 0.3962 | 0.8576 | 0.8576 | 0.8576 | 0.8568 | | 0.2497 | 3.0 | 1791 | 0.4695 | 0.8677 | 0.8668 | 0.8677 | 0.8665 | | 0.1816 | 4.0 | 2388 | 0.5746 | 0.8677 | 0.8677 | 0.8677 | 0.8676 | | 0.1281 | 5.0 | 2985 | 0.6205 | 0.8618 | 0.8610 | 0.8618 | 0.8612 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
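A minimal inference sketch (the class labels are not documented in the card, so the pipeline simply returns whatever labels the checkpoint config defines; the example sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cm309/classification_bert_da_superior")
print(classifier("Company X reported a 12% rise in quarterly revenue."))
```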
cm309/classification_roberta_da_superior
cm309
2024-05-23T10:23:18Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:cm309/distilroberta-base-finetuned-Financial-News-Superior", "base_model:finetune:cm309/distilroberta-base-finetuned-Financial-News-Superior", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T10:22:13Z
--- license: apache-2.0 base_model: cm309/distilroberta-base-finetuned-Financial-News-Superior tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: classification_roberta_da_superior results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # classification_roberta_da_superior This model is a fine-tuned version of [cm309/distilroberta-base-finetuned-Financial-News-Superior](https://huggingface.co/cm309/distilroberta-base-finetuned-Financial-News-Superior) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4049 - Accuracy: 0.8710 - Precision: 0.8768 - Recall: 0.8710 - F1: 0.8726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.3066 | 1.0 | 597 | 0.4151 | 0.8400 | 0.8661 | 0.8400 | 0.8461 | | 0.3089 | 2.0 | 1194 | 0.4049 | 0.8710 | 0.8768 | 0.8710 | 0.8726 | | 0.224 | 3.0 | 1791 | 0.4326 | 0.8836 | 0.8890 | 0.8836 | 0.8851 | | 0.1808 | 4.0 | 2388 | 0.5579 | 0.8853 | 0.8899 | 0.8853 | 0.8866 | | 0.1377 | 5.0 | 2985 | 0.5675 | 0.8953 | 0.8978 | 0.8953 | 0.8962 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Muhammad2003/TriMistral-7B-SLERP
Muhammad2003
2024-05-23T10:23:07Z
182
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-15T11:27:29Z
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge model-index: - name: TriMistral-7B-SLERP results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 64.25 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-SLERP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.47 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-SLERP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-SLERP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 53.57 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-SLERP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-SLERP name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 59.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Muhammad2003/TriMistral-7B-SLERP name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. 
### Models Merged The following models were included in the merge: * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) * [instructlab/merlinite-7b-lab](https://huggingface.co/instructlab/merlinite-7b-lab) ### Configuration Since Slerp allows merging two models at a time, the following YAML configurations were used to produce this model: ```yaml slices: - sources: - model: HuggingFaceH4/zephyr-7b-beta layer_range: [0, 32] - model: NousResearch/Hermes-2-Pro-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: HuggingFaceH4/zephyr-7b-beta parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` Then ```yaml slices: - sources: - model: ./merge layer_range: [0, 32] - model: instructlab/merlinite-7b-lab layer_range: [0, 32] merge_method: slerp base_model: ./merge parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Muhammad2003__TriMistral-7B-SLERP) | Metric |Value| |---------------------------------|----:| |Avg. |67.76| |AI2 Reasoning Challenge (25-Shot)|64.25| |HellaSwag (10-Shot) |85.47| |MMLU (5-Shot) |64.89| |TruthfulQA (0-shot) |53.57| |Winogrande (5-shot) |79.16| |GSM8k (5-shot) |59.21|
TREFRE/hosteleria
TREFRE
2024-05-23T10:22:37Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-23T10:22:37Z
--- license: apache-2.0 ---
thirumarane/ppo-LunarLander-v2
thirumarane
2024-05-23T10:19:09Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-18T17:05:00Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 280.72 +/- 21.03 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
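One possible way to fill in the TODO above; the checkpoint filename is an assumption based on the usual deep-RL-course naming convention and may need to be adjusted to the actual file in this repo:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Filename is an assumption; check the Files & versions tab for the exact name.
checkpoint = load_from_hub(
    repo_id="thirumarane/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded agent over a few episodes.
env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```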
EleutherAI/Mistral-7B-v0.1-authors-random-standardized-random-names
EleutherAI
2024-05-23T10:19:05Z
9
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T00:32:03Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
win10/Meta-Llama-3-15B-Instruct
win10
2024-05-23T10:18:24Z
7
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pytorch", "llama-3", "mergekit", "merge", "conversational", "en", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T14:29:24Z
--- library_name: transformers language: - en pipeline_tag: text-generation tags: - pytorch - llama - llama-3 - mergekit - merge license: llama3 --- # llama3 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # embed_tokens comes along with the ride with whatever is the first layer layer_range: [0, 1] - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # add dummy second model with 0 weight so tokenizer-based merge routine is invoked for embed_tokens layer_range: [0, 1] - sources: - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct layer_range: [1, 24] - sources: - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct layer_range: [8, 20] - sources: - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct layer_range: [18, 32] - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct layer_range: [18, 32] merge_method: passthrough dtype: bfloat16 ```
yaaserahr/phi3-finetune
yaaserahr
2024-05-23T10:18:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-23T10:17:14Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** yaaserahr - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Abhi5ingh/ControlnetDresscode
Abhi5ingh
2024-05-23T10:14:05Z
3
1
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "arxiv:2404.18591", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-10T20:27:53Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-Abhi5ingh/model_dresscode Have a design in mind that you would love to visualize? Try my fashion generation model in the playground hosted on its Hugging Face Space: https://huggingface.co/spaces/Abhi5ingh/fashionsd Research Paper: https://arxiv.org/abs/2404.18591 These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning on sketch and text. The results and the validation inference can be found below: Results: ![images_3](./firs.png) prompt: hem shoulder top in navy blue ![images_0](./images_0.png) prompt: beautiful floral gown ![images_1](./images_1.png) prompt: one-shoulder textured dress one long draping sleeve one sleeved mini purple evening dress ![images_2](./images_2.png)
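For readers who want to try these weights directly, a minimal diffusers inference sketch is shown below. It assumes the repository loads as a standard `ControlNetModel` on top of runwayml/stable-diffusion-v1-5; the sketch-image path and generation settings are illustrative placeholders, not values from the card.

```python
# Minimal sketch (not from the original card): load the Dresscode ControlNet
# and condition SD 1.5 on a fashion sketch plus a text prompt.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "Abhi5ingh/ControlnetDresscode", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

sketch = load_image("dress_sketch.png")  # placeholder path to a conditioning sketch
image = pipe("beautiful floral gown", image=sketch, num_inference_steps=30).images[0]
image.save("dresscode_output.png")
```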
giantdev/88ZigmAJVEfWtwtm1
giantdev
2024-05-23T10:10:57Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T10:09:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jiangzeyinzi/SD15-TEXT_LORA-3DStyle-20240523-test
jiangzeyinzi
2024-05-23T10:07:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-23T10:06:57Z
--- frameworks: - Pytorch license: apache-2.0 tasks: - efficient-diffusion-tuning --- <p align="center"> <h2 align="center">SD15-TEXT_LORA-3DStyle-20240523-test</h2> <p align="center"> <br> <a href="https://github.com/modelscope/scepter/"><img src="https://img.shields.io/badge/powered by-scepter-6FEBB9.svg"></a> <br> </p> ## Model Introduction test123 ## Model Parameters <table> <thead> <tr> <th rowspan="2">Base Model</th> <th rowspan="2">Tuner Type</th> <th colspan="4">Training Parameters</th> </tr> <tr> <th>Batch Size</th> <th>Epochs</th> <th>Learning Rate</th> <th>Resolution</th> </tr> </thead> <tbody align="center"> <tr> <td rowspan="8">SD1.5</td> <td>TEXT_LORA</td> <td>4</td> <td>50</td> <td>0.0001</td> <td>[512, 512]</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Data Type</th> <th>Data Space</th> <th>Data Name</th> <th>Data Subset</th> </tr> </thead> <tbody align="center"> <tr> <td>Dataset zip</td> <td></td> <td>/home/scepter/cache/scepter_ui/datasets/scepter_txt2img_3D_example</td> <td>default</td> </tr> </tbody> </table> ## Model Performance Given the input "a boy wearing a jacket," the following image may be generated: ![image](./image.jpg) ## Model Usage ### Command Line Execution * Run using Scepter's SDK, taking care to use different configuration files in accordance with the different base models, as per the corresponding relationships shown below <table> <thead> <tr> <th rowspan="2">Base Model</th> <th rowspan="1">LORA</th> <th colspan="1">SCE</th> <th colspan="1">TEXT_LORA</th> <th colspan="1">TEXT_SCE</th> </tr> </thead> <tbody align="center"> <tr> <td rowspan="8">SD1.5</td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/examples/generation/stable_diffusion_1.5_512_lora.yaml">lora_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/scedit/t2i/sd15_512_sce_t2i_swift.yaml">sce_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/examples/generation/stable_diffusion_1.5_512_text_lora.yaml">text_lora_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/scedit/t2i/stable_diffusion_1.5_512_text_sce.yaml">text_sce_cfg</a></td> </tr> </tbody> <tbody align="center"> <tr> <td rowspan="8">SD2.1</td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/examples/generation/stable_diffusion_2.1_768_lora.yaml">lora_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/scedit/t2i/sd21_768_sce_t2i_swift.yaml">sce_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/examples/generation/stable_diffusion_2.1_768_text_lora.yaml">text_lora_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/scedit/t2i/sd21_768_text_sce_t2i_swift.yaml">text_sce_cfg</a></td> </tr> </tbody> <tbody align="center"> <tr> <td rowspan="8">SDXL</td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/examples/generation/stable_diffusion_xl_1024_lora.yaml">lora_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/scedit/t2i/sdxl_1024_sce_t2i_swift.yaml">sce_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/examples/generation/stable_diffusion_xl_1024_text_lora.yaml">text_lora_cfg</a></td> <td><a href="https://github.com/modelscope/scepter/blob/main/scepter/methods/scedit/t2i/sdxl_1024_text_sce_t2i_swift.yaml">text_sce_cfg</a></td> </tr> 
</tbody> </table> * Running from Source Code ```shell git clone https://github.com/modelscope/scepter.git cd scepter pip install -r requirements/recommended.txt PYTHONPATH=. python scepter/tools/run_inference.py --pretrained_model {this model folder} --cfg {lora_cfg} or {sce_cfg} or {text_lora_cfg} or {text_sce_cfg} --prompt 'a boy wearing a jacket' --save_folder 'inference' ``` * Running after Installing Scepter (Recommended) ```shell pip install scepter python -m scepter.tools.run_inference --pretrained_model {this model folder} --cfg {lora_cfg} or {sce_cfg} or {text_lora_cfg} or {text_sce_cfg} --prompt 'a boy wearing a jacket' --save_folder 'inference' ``` ### Running with Scepter Studio ```shell pip install scepter # Launch Scepter Studio python -m scepter.tools.webui ``` * Refer to the following guides for model usage. (video url) ## Model Reference If you wish to use this model for your own purposes, please cite it as follows. ```bibtex @misc{SD15-TEXT_LORA-3DStyle-20240523-test, title = {SD15-TEXT_LORA-3DStyle-20240523-test, https://huggingface.co/jiangzeyinzi/SD15-TEXT_LORA-3DStyle-20240523-test}, author = {jiangzeyinzi}, year = {2024} } ``` This model was trained using [Scepter Studio](https://github.com/modelscope/scepter); [Scepter](https://github.com/modelscope/scepter) is an algorithm framework and toolbox developed by the Alibaba Tongyi Wanxiang Team. It provides a suite of tools and models for image generation, editing, fine-tuning, data processing, and more. If you find our work beneficial for your research, please cite as follows. ```bibtex @misc{scepter, title = {SCEPTER, https://github.com/modelscope/scepter}, author = {SCEPTER}, year = {2023} } ```
Soukaina588956468/Kidney_Anatomy_Trained_Model
Soukaina588956468
2024-05-23T10:05:47Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:tiiuae/falcon-7b", "base_model:adapter:tiiuae/falcon-7b", "license:apache-2.0", "region:us" ]
null
2024-05-23T10:05:44Z
--- license: apache-2.0 base_model: tiiuae/falcon-7b tags: - generated_from_trainer model-index: - name: Kidney_Anatomy_Trained_Model results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Kidney_Anatomy_Trained_Model This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 - load_in_4bit: True - load_in_8bit: False ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - training_steps: 80 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.5.0 - Transformers 4.38.2 - Pytorch 2.3.0+cu121 - Datasets 2.13.1 - Tokenizers 0.15.2
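Since the card above lists the bitsandbytes quantization settings but no inference code, here is a hedged sketch of reloading the adapter the same way it was trained (nf4, double quantization, bfloat16 compute). The prompt and generation settings are placeholders, not values from the card.

```python
# Sketch only: reload tiiuae/falcon-7b in 4-bit (matching the quantization
# config listed above) and attach this PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Soukaina588956468/Kidney_Anatomy_Trained_Model")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer("Describe the anatomy of the kidney.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```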
jojo-ai-mst/rolema-7b-it
jojo-ai-mst
2024-05-23T10:01:29Z
0
2
transformers
[ "transformers", "safetensors", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-14T12:46:34Z
--- library_name: transformers license: mit language: - en --- # Rolema 7B Rolema 7B is a large language model that works effectively under a 4-bit quantization process. Rolema 7B is based on the backbone of the Gemma-7B model by Google. ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Min Si Thu - **Model type:** Text Generation Large Language Model - **Language(s) (NLP):** English - **License:** MIT ### How to use Installing Libraries ```bash %%capture %pip install -U bitsandbytes %pip install -U transformers %pip install -U peft %pip install -U accelerate %pip install -U trl %pip install -U datasets ``` Code Implementation ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig from peft import PeftModel, PeftConfig base_model = "google/gemma-7b-it" adapter_model = "jojo-ai-mst/rolema-7b-it" # Load base model(Gemma 7B-it) bnbConfig = BitsAndBytesConfig( load_in_4bit = True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, ) model = AutoModelForCausalLM.from_pretrained(base_model,quantization_config=bnbConfig,) # device_map="auto" autosplit for cuda model = PeftModel.from_pretrained(model, adapter_model) tokenizer = AutoTokenizer.from_pretrained(base_model) model = model.to("cuda") inputs = tokenizer("How to learn programming", return_tensors="pt") inputs = inputs.to("cuda") outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=1000) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]) ```
hgnoi/OSn7XxRR9tPuuhgo
hgnoi
2024-05-23T09:58:51Z
131
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T09:57:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lucasvw/tinyllama-1.1B_alpaca_2k_lora
lucasvw
2024-05-23T09:52:48Z
2
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2024-05-23T09:37:32Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model-index: - name: tinyllama-1.1B_alpaca_2k_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml # Adapted from https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/tiny-llama/lora.yml base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: true load_in_4bit: false strict: false datasets: - path: mhenrichsen/alpaca_2k_test type: alpaca dataset_prepared_path: val_set_size: 0.05 output_dir: ./outputs/lora-out hub_model_id: lucasvw/tinyllama-1.1B_alpaca_2k_lora wandb_project: tinyllama-1.1B_alpaca_2k_lora wandb_entity: lucasvw sequence_len: 4096 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: ``` </details><br> # tinyllama-1.1B_alpaca_2k_lora This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2132 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.4615 | 0.08 | 1 | 1.4899 | | 1.3851 | 0.24 | 3 | 1.4860 | | 1.3667 | 0.48 | 6 | 1.4396 | | 1.2684 | 0.72 | 9 | 1.3410 | | 1.2274 | 0.96 | 12 | 1.2938 | | 1.2519 | 1.16 | 15 | 1.2810 | | 1.2263 | 1.4 | 18 | 1.2534 | | 1.1355 | 1.6400 | 21 | 1.2357 | | 1.2697 | 1.88 | 24 | 1.2260 | | 1.1492 | 2.08 | 27 | 1.2217 | | 1.1531 | 2.32 | 30 | 1.2216 | | 1.1951 | 2.56 | 33 | 1.2184 | | 1.1118 | 2.8 | 36 | 1.2158 | | 1.1514 | 3.04 | 39 | 1.2127 | | 1.1893 | 3.24 | 42 | 1.2124 | | 1.1014 | 3.48 | 45 | 1.2115 | | 1.1892 | 3.7200 | 48 | 1.2132 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
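The card above stops at training metrics, so a short, hedged inference sketch is added here. It assumes the adapter attaches to the TinyLlama base named in the axolotl config; the alpaca-style prompt layout is an assumption inferred from the mhenrichsen/alpaca_2k_test training set.

```python
# Sketch only: attach the LoRA adapter to its TinyLlama base and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "lucasvw/tinyllama-1.1B_alpaca_2k_lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Alpaca-style prompt (assumed from the training dataset, not stated in the card)
prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```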
Heimat24/vhs_burghausen_danielheinz_e5_v2-qa_generation_secretary-6-2-0.8
Heimat24
2024-05-23T09:52:17Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-05-23T09:51:42Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 97 with parameters: ``` {'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 19, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
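Because the {MODEL_NAME} placeholder in the card above was never filled in, the snippet below restates the basic usage with this repository's actual id and adds a small semantic-search example; the query and documents are illustrative only.

```python
# Sketch only: encode a query and a few documents, then rank by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Heimat24/vhs_burghausen_danielheinz_e5_v2-qa_generation_secretary-6-2-0.8")
docs = ["Opening hours of the registration office", "How to enrol in an evening course"]
query = "When can I register?"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]
for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```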
OduguSusmitha/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo
OduguSusmitha
2024-05-23T09:51:56Z
3
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-23T06:51:24Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** OduguSusmitha - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ayilmaz/whisper-small-jsl-dictation-adapters
ayilmaz
2024-05-23T09:48:57Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T09:48:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
agutell/Llama-3-8B-wikihow
agutell
2024-05-23T09:48:11Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-05-23T07:32:08Z
--- license: llama3 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B model-index: - name: Llama-3-8B-wikihow results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3-8B-wikihow This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9372 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.1959 | 0.0323 | 20 | 2.0083 | | 2.0718 | 0.0646 | 40 | 1.9825 | | 2.0301 | 0.0970 | 60 | 1.9742 | | 2.0023 | 0.1293 | 80 | 1.9466 | | 1.9633 | 0.1616 | 100 | 1.9372 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
nbeerbower/llama-3-stinky-v2-8B
nbeerbower
2024-05-23T09:44:30Z
177
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS", "base_model:merge:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS", "base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", "base_model:merge:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct", "base_model:cloudyu/Meta-Llama-3-8B-Instruct-DPO", "base_model:merge:cloudyu/Meta-Llama-3-8B-Instruct-DPO", "base_model:elyn-dev/Llama-3-Soliloquy-8B-v2", "base_model:merge:elyn-dev/Llama-3-Soliloquy-8B-v2", "base_model:flammenai/Mahou-1.0-llama3-8B", "base_model:merge:flammenai/Mahou-1.0-llama3-8B", "base_model:flammenai/Mahou-1.1-llama3-8B", "base_model:merge:flammenai/Mahou-1.1-llama3-8B", "base_model:grimjim/llama-3-merge-pp-instruct-8B", "base_model:merge:grimjim/llama-3-merge-pp-instruct-8B", "base_model:grimjim/llama-3-merge-virt-req-8B", "base_model:merge:grimjim/llama-3-merge-virt-req-8B", "base_model:grimjim/llama-3-nvidia-ChatQA-1.5-8B", "base_model:merge:grimjim/llama-3-nvidia-ChatQA-1.5-8B", "base_model:jeiku/Orthocopter_8B", "base_model:merge:jeiku/Orthocopter_8B", "base_model:mlabonne/ChimeraLlama-3-8B-v2", "base_model:merge:mlabonne/ChimeraLlama-3-8B-v2", "base_model:nbeerbower/llama-3-stella-8B", "base_model:merge:nbeerbower/llama-3-stella-8B", "base_model:uygarkurt/llama-3-merged-linear", "base_model:merge:uygarkurt/llama-3-merged-linear", "license:other", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-11T20:37:00Z
--- license: other library_name: transformers tags: - mergekit - merge base_model: - mlabonne/ChimeraLlama-3-8B-v2 - grimjim/llama-3-merge-pp-instruct-8B - grimjim/llama-3-merge-virt-req-8B - uygarkurt/llama-3-merged-linear - jeiku/Orthocopter_8B - grimjim/llama-3-nvidia-ChatQA-1.5-8B - openlynn/Llama-3-Soliloquy-8B-v2 - VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct - nbeerbower/llama-3-stella-8B - cloudyu/Meta-Llama-3-8B-Instruct-DPO - NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS - flammenai/Mahou-1.0-llama3-8B - flammenai/Mahou-1.1-llama3-8B license_name: llama3 model-index: - name: llama-3-stinky-v2-8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.98 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-stinky-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.2 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-stinky-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 68.33 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-stinky-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.83 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-stinky-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-stinky-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.75 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-stinky-v2-8B name: Open LLM Leaderboard --- # llama-3-stinky-v2-8B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [flammenai/Mahou-1.1-llama3-8B](https://huggingface.co/flammenai/Mahou-1.1-llama3-8B) as a base. 
### Models Merged The following models were included in the merge: * [mlabonne/ChimeraLlama-3-8B-v2](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v2) * [grimjim/llama-3-merge-pp-instruct-8B](https://huggingface.co/grimjim/llama-3-merge-pp-instruct-8B) * [grimjim/llama-3-merge-virt-req-8B](https://huggingface.co/grimjim/llama-3-merge-virt-req-8B) * [uygarkurt/llama-3-merged-linear](https://huggingface.co/uygarkurt/llama-3-merged-linear) * [jeiku/Orthocopter_8B](https://huggingface.co/jeiku/Orthocopter_8B) * [grimjim/llama-3-nvidia-ChatQA-1.5-8B](https://huggingface.co/grimjim/llama-3-nvidia-ChatQA-1.5-8B) * [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2) * [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) * [nbeerbower/llama-3-stella-8B](https://huggingface.co/nbeerbower/llama-3-stella-8B) * [cloudyu/Meta-Llama-3-8B-Instruct-DPO](https://huggingface.co/cloudyu/Meta-Llama-3-8B-Instruct-DPO) * [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) * [flammenai/Mahou-1.0-llama3-8B](https://huggingface.co/flammenai/Mahou-1.0-llama3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mlabonne/ChimeraLlama-3-8B-v2 - model: cloudyu/Meta-Llama-3-8B-Instruct-DPO - model: nbeerbower/llama-3-stella-8B - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct - model: uygarkurt/llama-3-merged-linear - model: openlynn/Llama-3-Soliloquy-8B-v2 - model: grimjim/llama-3-merge-pp-instruct-8B - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS - model: grimjim/llama-3-merge-virt-req-8B - model: jeiku/Orthocopter_8B - model: grimjim/llama-3-nvidia-ChatQA-1.5-8B - model: flammenai/Mahou-1.0-llama3-8B merge_method: model_stock base_model: flammenai/Mahou-1.1-llama3-8B dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__llama-3-stinky-v2-8B) | Metric |Value| |---------------------------------|----:| |Avg. |70.27| |AI2 Reasoning Challenge (25-Shot)|66.98| |HellaSwag (10-Shot) |83.20| |MMLU (5-Shot) |68.33| |TruthfulQA (0-shot) |55.83| |Winogrande (5-shot) |77.51| |GSM8k (5-shot) |69.75|
nbeerbower/llama-3-sauce-v2-8B
nbeerbower
2024-05-23T09:44:22Z
173
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "experimental", "conversational", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:flammenai/FlameMix-DPO-v1", "base_model:nbeerbower/llama-3-bophades-v1-8B", "base_model:finetune:nbeerbower/llama-3-bophades-v1-8B", "license:llama3", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-13T03:29:52Z
--- license: llama3 library_name: transformers tags: - experimental base_model: - nbeerbower/llama-3-bophades-v1-8B datasets: - jondurbin/gutenberg-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - flammenai/FlameMix-DPO-v1 model-index: - name: llama-3-sauce-v2-8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.61 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.11 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 67.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.39 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/llama-3-sauce-v2-8B name: Open LLM Leaderboard --- # llama-3-sauce-v2-8B This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE) This is a bad finetune on nbeerbower/llama-3-spicy-abliterated-stella-8B using various DPO sets. # Chat Format Please use the ChatML format or you may experience poor results. ``` <|im_start|>system {System Prompt Here!}<|im_end|> <|im_start|>assistant {Message from AI}<|im_end|> <|im_start|>user {Message from User}<|im_end|> ``` # Method Finetuned using an A100 on Google Colab. 
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration Dataset preparation: ```python def chatml_format(example): # Format system system = "" if example.get('system') and len(example['system']) > 0: systemMessage = example['system'] system = "<|im_start|>system\n" + systemMessage + "<|im_end|>\n" # Format instruction prompt = "<|im_start|>user\n" + example['prompt'] + "<|im_end|>\n<|im_start|>assistant\n" # Format chosen answer chosen = example['chosen'] + "<|im_end|>\n" # Format rejected answer rejected = example['rejected'] + "<|im_end|>\n" return { "prompt": system + prompt, "chosen": chosen, "rejected": rejected, } # Array of datasets to concat ds = [ "jondurbin/truthy-dpo-v0.1", "jondurbin/gutenberg-dpo-v0.1", "flammenai/FlameMix-DPO-v1" ] # load_dataset and combine all loaded_datasets = [load_dataset(dataset_name, split='train') for dataset_name in ds] dataset = concatenate_datasets(loaded_datasets) # Save columns original_columns = dataset.column_names # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "left" # Format dataset dataset = dataset.map( chatml_format, remove_columns=original_columns ) ``` LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=1, gradient_accumulation_steps=1, gradient_checkpointing=True, learning_rate=3e-5, lr_scheduler_type="cosine", max_steps=4000, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, force_use_ref_model=True ) # Fine-tune model with DPO dpo_trainer.train() ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__llama-3-sauce-v2-8B) | Metric |Value| |---------------------------------|----:| |Avg. |70.38| |AI2 Reasoning Challenge (25-Shot)|65.61| |HellaSwag (10-Shot) |83.11| |MMLU (5-Shot) |67.98| |TruthfulQA (0-shot) |56.39| |Winogrande (5-shot) |76.72| |GSM8k (5-shot) |72.48|
malerbe/Car_racing_V0_V1
malerbe
2024-05-23T09:41:09Z
0
0
stable-baselines3
[ "stable-baselines3", "CarRacing-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-23T09:39:08Z
--- library_name: stable-baselines3 tags: - CarRacing-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CarRacing-v2 type: CarRacing-v2 metrics: - type: mean_reward value: -59.68 +/- 34.29 name: mean_reward verified: false --- # **PPO** Agent playing **CarRacing-v2** This is a trained model of a **PPO** agent playing **CarRacing-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
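The usage section of the card above is still the template placeholder ("TODO: Add your code"), so a hedged sketch is provided here. The checkpoint filename inside the repo is an assumption (verify it in the repository's files tab), and the sketch presumes a stable-baselines3 release that uses Gymnasium.

```python
# Sketch only: download the PPO checkpoint from the Hub and roll it out in CarRacing-v2.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="malerbe/Car_racing_V0_V1",
    filename="ppo-CarRacing-v2.zip",  # assumed filename; check the repo's files tab
)
model = PPO.load(checkpoint)

env = gym.make("CarRacing-v2", render_mode="rgb_array")
obs, _ = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```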
TwT-6/cr-model
TwT-6
2024-05-23T09:38:25Z
63
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-08T10:43:47Z
---
license: cc-by-nc-4.0
model-index:
- name: cr-model
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 57.85
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TwT-6/cr-model
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.66
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TwT-6/cr-model
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 68.73
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TwT-6/cr-model
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 58.2
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TwT-6/cr-model
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TwT-6/cr-model
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.88
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TwT-6/cr-model
      name: Open LLM Leaderboard
---

cr-model is a general-purpose language model designed to understand and generate human-like text. It covers a wide range of language tasks, from answering questions and giving recommendations to casual conversation, and its broad knowledge base and contextual understanding allow it to carry out complex language-based tasks effectively.

## How to use?

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch

model = AutoModelForCausalLM.from_pretrained(
    'TwT-6/cr-model',
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained('TwT-6/cr-model', trust_remote_code=True)

inputs = '你好'
inputs = f'<|omni_start|>### User:\n{inputs}\n\n### Assistant:\n'
inputs = tokenizer(inputs, return_tensors="pt").to('cuda')

output_ids = model.generate(**inputs)[0].cpu()
output = tokenizer.decode(output_ids[inputs.input_ids.shape[-1]:])

print(output)
## 你好！很高兴见到你。有什么我可以帮助你的吗
## (Hello! Nice to meet you. Is there anything I can help you with?)
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TwT-6__cr-model)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |68.09|
|AI2 Reasoning Challenge (25-Shot)|57.85|
|HellaSwag (10-Shot)              |81.66|
|MMLU (5-Shot)                    |68.73|
|TruthfulQA (0-shot)              |58.20|
|Winogrande (5-shot)              |76.24|
|GSM8k (5-shot)                   |65.88|
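The usage snippet above imports `GenerationConfig` but never applies it. As a supplementary sketch that is not part of the original card, generation settings could be passed explicitly as shown below; the parameter values are illustrative assumptions, not recommendations from the model author.

```python
# Continues from the loading snippet above (model, tokenizer and inputs are already defined).
from transformers.generation import GenerationConfig

gen_config = GenerationConfig(
    max_new_tokens=512,   # cap the length of the assistant reply (illustrative value)
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # illustrative sampling settings, not author-recommended
    top_p=0.9,
)

output_ids = model.generate(**inputs, generation_config=gen_config)[0].cpu()
output = tokenizer.decode(output_ids[inputs.input_ids.shape[-1]:])
print(output)
```

Setting `do_sample=False` instead would give deterministic, greedy decoding when reproducible output is preferred.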
rayanebouta/model_unsloth_test
rayanebouta
2024-05-23T09:30:10Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T07:13:41Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
erland-ramadhan/fine_tuned-Llama-3-8B-MMLU_Subset-2
erland-ramadhan
2024-05-23T09:26:54Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-23T09:25:43Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HaikuEU/mixtral-fine-tuned-bin
HaikuEU
2024-05-23T09:23:05Z
2
0
transformers
[ "transformers", "pytorch", "mixtral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-23T09:17:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
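The card above leaves its usage section as a placeholder. A minimal sketch follows, assuming the repository holds a standard Mixtral causal-LM checkpoint that `transformers` can load directly; the prompt and generation settings are illustrative only and do not come from the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HaikuEU/mixtral-fine-tuned-bin"

# The repo tags suggest 4-bit (bitsandbytes) weights, so the bitsandbytes
# package may be required to load this checkpoint; device_map="auto" needs accelerate.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Hello, how are you?"  # illustrative prompt; no template is documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```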
ChristianSneffeFleischer/bert-finetuned-ner-football_final
ChristianSneffeFleischer
2024-05-23T09:22:44Z
164
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-23T08:55:11Z
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-football_final
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner-football_final

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1280
- Precision: 0.8723
- Recall: 0.9143
- F1: 0.8928
- Accuracy: 0.9697

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 199  | 0.1830          | 0.7612    | 0.8280 | 0.7932 | 0.9483   |
| No log        | 2.0   | 398  | 0.1424          | 0.8580    | 0.9103 | 0.8834 | 0.9643   |
| 0.2566        | 3.0   | 597  | 0.1280          | 0.8723    | 0.9143 | 0.8928 | 0.9697   |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.2
- Datasets 2.19.1
- Tokenizers 0.19.1
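As a usage sketch that is not included in the original card, the fine-tuned checkpoint can presumably be loaded through the standard token-classification pipeline; the example sentence is an assumption, and the card does not document the entity label set.

```python
from transformers import pipeline

# Token-classification pipeline for the fine-tuned NER model.
# aggregation_strategy="simple" merges word-piece tokens back into whole entities.
ner = pipeline(
    "token-classification",
    model="ChristianSneffeFleischer/bert-finetuned-ner-football_final",
    aggregation_strategy="simple",
)

# Illustrative football-related sentence; returned labels depend on the
# (undocumented) label set this model was trained with.
print(ner("Lionel Messi scored twice for Inter Miami on Saturday."))
```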
betteib/xlm-mlm-tn
betteib
2024-05-23T09:22:07Z
62
0
transformers
[ "transformers", "tf", "xlm-roberta", "fill-mask", "generated_from_keras_callback", "base_model:Davlan/xlm-roberta-base-finetuned-arabic", "base_model:finetune:Davlan/xlm-roberta-base-finetuned-arabic", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-23T09:14:02Z
---
license: mit
base_model: Davlan/xlm-roberta-base-finetuned-arabic
tags:
- generated_from_keras_callback
model-index:
- name: betteib/xlm-mlm-tn
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# betteib/xlm-mlm-tn

This model is a fine-tuned version of [Davlan/xlm-roberta-base-finetuned-arabic](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.1277
- Train Accuracy: 0.0048
- Validation Loss: 8.1367
- Validation Accuracy: 0.0046
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 0.0001, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 118, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 6, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 8.7004     | 0.0046         | 8.1656          | 0.0046              | 0     |
| 8.1277     | 0.0048         | 8.1367          | 0.0046              | 1     |

### Framework versions

- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.19.1
- Tokenizers 0.13.3
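A possible usage sketch, not part of the original card: the repository ships TensorFlow weights (tf tag) and the task is fill-mask, so the standard pipeline API should apply with `framework="tf"`. The example sentence is illustrative, and given the reported training accuracy the predictions may not yet be meaningful.

```python
from transformers import pipeline

# Fill-mask pipeline; framework="tf" because the repository ships TensorFlow weights.
fill_mask = pipeline(
    "fill-mask",
    model="betteib/xlm-mlm-tn",
    framework="tf",
)

# XLM-RoBERTa models use "<mask>" as the mask token; the Arabic sentence is illustrative.
for prediction in fill_mask("هذا <mask> جميل."):
    print(prediction["token_str"], prediction["score"])
```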
seollab/roberta-base-finetuned-emotion
seollab
2024-05-23T09:21:01Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T09:12:26Z
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9225
    - name: F1
      type: f1
      value: 0.9230803565147492
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-finetuned-emotion

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1975
- Accuracy: 0.9225
- F1: 0.9231

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7202        | 1.0   | 250  | 0.2841          | 0.9045   | 0.9060 |
| 0.2369        | 2.0   | 500  | 0.1975          | 0.9225   | 0.9231 |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
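As a usage sketch that is not part of the original card, the standard text-classification pipeline should work for this emotion fine-tune; the label names come from the emotion dataset rather than from this card, and the example sentence is illustrative.

```python
from transformers import pipeline

# Text-classification pipeline for the emotion fine-tune.
classifier = pipeline(
    "text-classification",
    model="seollab/roberta-base-finetuned-emotion",
)

# top_k=None returns a score for every emotion label instead of only the top one.
print(classifier("I can't believe I finally got the job, I'm thrilled!", top_k=None))
```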
andricValdez/bert-base-multilingual-cased-finetuned-autext24
andricValdez
2024-05-23T09:20:33Z
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-23T07:14:33Z
---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-cased-finetuned-autext24
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-multilingual-cased-finetuned-autext24

This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3038
- Accuracy: 0.9495
- F1: 0.9493

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 1200 | 0.1397          | 0.9470   | 0.9468 |
| 0.1244        | 2.0   | 2400 | 0.2977          | 0.9219   | 0.9211 |
| 0.1244        | 3.0   | 3600 | 0.1958          | 0.9503   | 0.9501 |
| 0.0311        | 4.0   | 4800 | 0.2257          | 0.9545   | 0.9544 |
| 0.0311        | 5.0   | 6000 | 0.3038          | 0.9495   | 0.9493 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
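A minimal usage sketch, not from the original card, assuming the standard text-classification pipeline; the card does not document what the labels represent, so mapping outputs such as "LABEL_0"/"LABEL_1" to task classes is left to the user.

```python
from transformers import pipeline

# Generic text-classification usage for the fine-tuned multilingual BERT checkpoint.
clf = pipeline(
    "text-classification",
    model="andricValdez/bert-base-multilingual-cased-finetuned-autext24",
)

# Illustrative input; the label semantics are not documented in the card.
print(clf("Example sentence to classify."))
```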
Sirawipa/tian-ft
Sirawipa
2024-05-23T09:17:57Z
146
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:sail/Sailor-0.5B", "base_model:finetune:sail/Sailor-0.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T09:15:03Z
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: sail/Sailor-0.5B
model-index:
- name: tian-ft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tian-ft

This model is a fine-tuned version of [sail/Sailor-0.5B](https://huggingface.co/sail/Sailor-0.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3696

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8288        | 0.9362 | 11   | 5.0945          |
| 2.0955        | 1.9574 | 23   | 0.5197          |
| 0.3705        | 2.9787 | 35   | 0.3038          |
| 0.1986        | 4.0    | 47   | 0.2816          |
| 0.1402        | 4.9362 | 58   | 0.2941          |
| 0.0884        | 5.9574 | 70   | 0.3380          |
| 0.0636        | 6.9787 | 82   | 0.3373          |
| 0.0477        | 8.0    | 94   | 0.3413          |
| 0.0401        | 8.9362 | 105  | 0.3689          |
| 0.0267        | 9.3617 | 110  | 0.3696          |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
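A minimal usage sketch, not part of the original card, assuming the fine-tuned Sailor-0.5B checkpoint can be driven through the standard text-generation pipeline; the prompt is illustrative and the card does not specify a prompt or chat template.

```python
from transformers import pipeline

# Causal text-generation with the fine-tuned Sailor-0.5B checkpoint.
generator = pipeline("text-generation", model="Sirawipa/tian-ft")

# Illustrative prompt (Thai for "Hello"); no prompt template is documented in the card.
result = generator("สวัสดีครับ", max_new_tokens=64)
print(result[0]["generated_text"])
```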
casque/white_sexy_see-through_lingerie
casque
2024-05-23T09:17:07Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-05-23T09:15:24Z
---
license: creativeml-openrail-m
---