modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
Cicistawberry/y2k-v1
Cicistawberry
2024-06-21T04:37:47Z
4
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-06-21T04:00:18Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - diffusers-training - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'In the style of <s0><s1>, Coca-Cola' output: url: "image_0.png" - text: 'In the style of <s0><s1>, Coca-Cola' output: url: "image_1.png" - text: 'In the style of <s0><s1>, Coca-Cola' output: url: "image_2.png" - text: 'In the style of <s0><s1>, Coca-Cola' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A <s0><s1> ad license: openrail++ --- # SDXL LoRA DreamBooth - Cicistawberry/y2k-v1 <Gallery /> ## Model description ### These are Cicistawberry/y2k-v1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke - **LoRA**: download **[`y2k-v1.safetensors` here 💾](/Cicistawberry/y2k-v1/blob/main/y2k-v1.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:y2k-v1:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`y2k-v1_emb.safetensors` here 💾](/Cicistawberry/y2k-v1/blob/main/y2k-v1_emb.safetensors)**. - Place it in your `embeddings` folder. - Use it by adding `y2k-v1_emb` to your prompt. For example, `A y2k-v1_emb ad` (you need both the LoRA and the embeddings as they were trained together for this LoRA). ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Cicistawberry/y2k-v1', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='Cicistawberry/y2k-v1', filename='y2k-v1_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('In the style of <s0><s1>, Coca-Cola').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt. ## Details All [Files & versions](/Cicistawberry/y2k-v1/tree/main). The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
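The card above defers weighting and fusing of LoRAs to the diffusers documentation; as a hedged illustration that is not part of the original card, a continuation of its snippet could fuse the loaded LoRA at a reduced strength, assuming a recent diffusers release that exposes `fuse_lora`/`unfuse_lora`:

```py
# Hypothetical continuation of the card's snippet: `pipeline` already has the
# y2k-v1 LoRA and the <s0><s1> embeddings loaded.
pipeline.fuse_lora(lora_scale=0.7)  # bake the LoRA into the base weights at 70% strength

image = pipeline(
    "In the style of <s0><s1>, Coca-Cola",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("y2k_ad.png")

pipeline.unfuse_lora()  # restore the original base weights before trying another scale
```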
phunganhsang/xlm_70k_co_duoi
phunganhsang
2024-06-21T04:34:49Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T04:34:14Z
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: xlm_70k_co_vn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm_70k_co_vn This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1934 - Accuracy: 0.9679 - F1: 0.9680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.2054 | 1.0 | 1719 | 0.1449 | 0.9595 | 0.9599 | | 0.1226 | 2.0 | 3438 | 0.1261 | 0.9638 | 0.9641 | | 0.1038 | 3.0 | 5157 | 0.1055 | 0.9682 | 0.9684 | | 0.0856 | 4.0 | 6876 | 0.1107 | 0.9676 | 0.9678 | | 0.0732 | 5.0 | 8595 | 0.1240 | 0.9680 | 0.9680 | | 0.0595 | 6.0 | 10314 | 0.1457 | 0.9679 | 0.9679 | | 0.0494 | 7.0 | 12033 | 0.1931 | 0.9667 | 0.9668 | | 0.0425 | 8.0 | 13752 | 0.1772 | 0.9672 | 0.9674 | | 0.0375 | 9.0 | 15471 | 0.1867 | 0.9683 | 0.9684 | | 0.034 | 10.0 | 17190 | 0.1934 | 0.9679 | 0.9680 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
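The card above stops at the training log; as a hedged sketch that is not part of the original card, the checkpoint could be loaded for inference with the standard `transformers` pipeline, assuming the repo ships its tokenizer and label mapping:

```py
from transformers import pipeline

# Assumed usage sketch: load the fine-tuned XLM-RoBERTa classifier from the Hub.
classifier = pipeline("text-classification", model="phunganhsang/xlm_70k_co_duoi")

# The card does not document the label set or target language, so inspect the
# returned labels on a sample input.
print(classifier("Example sentence to classify."))
```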
damgomz/ft_1_11e6_base_x12
damgomz
2024-06-21T04:23:16Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:41:18Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 133936.00465154648 | | Emissions (Co2eq in kg) | 0.0810467465092037 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.581185752767985 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1395154321156447 | | Consumed energy (kWh) | 1.7207011848836269 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.257826808954227 | | Emissions (Co2eq in kg) | 0.05245826848852237 | ## Note 19 June 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_11e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.1e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.702339 | 0.615059 | | 1 | 0.340996 | 0.282906 | 0.910659 | | 2 | 0.247532 | 0.237807 | 0.924088 | | 3 | 0.198956 | 0.227085 | 0.931612 | | 4 | 0.160615 | 0.261149 | 0.938091 | | 5 | 0.132862 | 0.257193 | 0.925788 | | 6 | 0.107036 | 0.275526 | 0.926509 |
QuantFactory/Llama-3-8B-OpenHermes-2.5-1M-GGUF
QuantFactory
2024-06-21T04:17:15Z
18
0
null
[ "gguf", "axolotl", "generated_from_trainer", "text-generation", "base_model:Magpie-Align/Llama-3-8B-OpenHermes-2.5-1M", "base_model:quantized:Magpie-Align/Llama-3-8B-OpenHermes-2.5-1M", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-20T14:39:21Z
--- license: llama3 base_model: Magpie-Align/Llama-3-8B-OpenHermes-2.5-1M tags: - axolotl - generated_from_trainer model-index: - name: Llama-3-8B-OpenHermes-2.5-1M results: [] pipeline_tag: text-generation --- # QuantFactory/Llama-3-8B-OpenHermes-2.5-1M-GGUF This is quantized version of [Magpie-Align/Llama-3-8B-OpenHermes-2.5-1M](https://huggingface.co/Magpie-Align/Llama-3-8B-OpenHermes-2.5-1M) created using llama.cpp # Model Description [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: teknium/OpenHermes-2.5 type: sharegpt conversation: llama3 dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: ./out_Llama-8B-Openhermes-2.5 sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true wandb_project: SynDa wandb_entity: wandb_watch: wandb_name: Llama-3-8B-OpenHermes-2.5-1M wandb_log_model: hub_model_id: Magpie-Align/Llama-3-8B-OpenHermes-2.5-1M gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 2 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 3 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # Llama-3-8B-OpenHermes-2.5-1M This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6993 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9499 | 0.0007 | 1 | 0.9305 | | 0.6229 | 0.3337 | 488 | 0.7164 | | 0.6231 | 0.6674 | 976 | 0.7045 | | 0.5959 | 1.0011 | 1464 | 0.6959 | | 0.4997 | 1.3181 | 1952 | 0.7003 | | 0.529 | 1.6518 | 2440 | 0.6993 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Honyant/GPT1002M
Honyant
2024-06-21T04:16:40Z
66
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T03:52:05Z
--- license: apache-2.0 --- # GPT1002M A custom GPT model with 1.002 billion parameters, trained from scratch. Try sampling from the model with the Inference API.
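The card suggests sampling via the Inference API; a hedged local equivalent using the `transformers` text-generation pipeline (not part of the original card) might look like this:

```py
from transformers import pipeline

# Assumed usage sketch: sample from the 1.002B-parameter GPT-2-style model locally.
generator = pipeline("text-generation", model="Honyant/GPT1002M")

print(generator("Once upon a time", max_new_tokens=50, do_sample=True, temperature=0.8)[0]["generated_text"])
```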
jhoppanne/Dogs-Breed-Image-Classification-V2
jhoppanne
2024-06-21T04:14:57Z
43
1
transformers
[ "transformers", "safetensors", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/resnet-152", "base_model:finetune:microsoft/resnet-152", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-21T04:08:35Z
--- license: apache-2.0 base_model: microsoft/resnet-152 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: Dogs-Breed-Image-Classification-V2 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8408163265306122 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Dogs-Breed-Image-Classification-V2 This model is a fine-tuned version of [microsoft/resnet-152](https://huggingface.co/microsoft/resnet-152) on the [Stanford Dogs dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset). It achieves the following results on the evaluation set: - Loss: 1.0115 - Accuracy: 0.8408 ## Model description - [Link to the fine-tuned model using resnet-50](https://huggingface.co/jhoppanne/Dogs-Breed-Image-Classification-V0) - [Link to the fine-tuned model using resnet-101](https://huggingface.co/jhoppanne/Dogs-Breed-Image-Classification-V1) This model was trained using the dataset from [Kaggle - Stanford Dogs dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset). Quoted from the website: The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotation from ImageNet for the task of fine-grained image categorization. It was originally collected for fine-grain image categorization, a challenging problem as certain dog breeds have near identical features or differ in colour and age. Citation: Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [pdf] [poster] [BibTex] Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009. [pdf] [BibTex] ## Intended uses & limitations This model is fine-tuned solely for classifying 120 breeds of dogs. ## Training and evaluation data 75% training data, 25% testing data.
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 483 | 4.6525 | 0.7382 | | 4.7329 | 2.0 | 966 | 4.3558 | 0.7298 | | 4.5033 | 3.0 | 1449 | 3.9568 | 0.7471 | | 4.1405 | 4.0 | 1932 | 3.5160 | 0.7782 | | 3.7176 | 5.0 | 2415 | 3.0805 | 0.7946 | | 3.293 | 6.0 | 2898 | 2.6907 | 0.8021 | | 2.8898 | 7.0 | 3381 | 2.3044 | 0.8126 | | 2.5343 | 8.0 | 3864 | 2.0091 | 0.8177 | | 2.2188 | 9.0 | 4347 | 1.7910 | 0.8126 | | 1.9698 | 10.0 | 4830 | 1.6015 | 0.8194 | | 1.7532 | 11.0 | 5313 | 1.4383 | 0.8220 | | 1.586 | 12.0 | 5796 | 1.3355 | 0.8264 | | 1.4533 | 13.0 | 6279 | 1.2467 | 0.8260 | | 1.336 | 14.0 | 6762 | 1.1575 | 0.8313 | | 1.2641 | 15.0 | 7245 | 1.1038 | 0.8321 | | 1.185 | 16.0 | 7728 | 1.0606 | 0.8395 | | 1.1329 | 17.0 | 8211 | 1.0178 | 0.8398 | | 1.0977 | 18.0 | 8694 | 1.0115 | 0.8408 | | 1.0732 | 19.0 | 9177 | 0.9945 | 0.8381 | | 1.0508 | 20.0 | 9660 | 0.9930 | 0.8393 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.3.0 - Datasets 2.15.0 - Tokenizers 0.15.1
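The card does not include an inference example; as a hedged sketch that is not part of the original card, the fine-tuned ResNet-152 checkpoint could be used through the `transformers` image-classification pipeline:

```py
from transformers import pipeline

# Assumed usage sketch: classify a dog photo with the fine-tuned ResNet-152 checkpoint.
classifier = pipeline("image-classification", model="jhoppanne/Dogs-Breed-Image-Classification-V2")

# "dog.jpg" is a placeholder; a local file path or an image URL both work here.
for prediction in classifier("dog.jpg", top_k=5):
    print(prediction["label"], round(prediction["score"], 3))
```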
mradermacher/Bid_Master_AI_Merged_v2-GGUF
mradermacher
2024-06-21T04:14:43Z
5
0
transformers
[ "transformers", "gguf", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-21T03:48:24Z
--- base_model: MohamedAshour1993/Bid_Master_AI_Merged_v2 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/MohamedAshour1993/Bid_Master_AI_Merged_v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Bid_Master_AI_Merged_v2-GGUF/resolve/main/Bid_Master_AI_Merged_v2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
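The card defers GGUF usage to TheBloke's READMEs; as a hedged sketch that is not part of the original card, one of the single-file quants listed above could be downloaded and run with `llama-cpp-python`, which is an assumed choice of runtime:

```py
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumed runtime: llama-cpp-python

# Download the Q4_K_M quant listed in the table above and run a short completion.
model_path = hf_hub_download(
    repo_id="mradermacher/Bid_Master_AI_Merged_v2-GGUF",
    filename="Bid_Master_AI_Merged_v2.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context size is a guess; tune for your hardware
print(llm("Summarize what a construction bid should contain.", max_tokens=128)["choices"][0]["text"])
```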
Salmamoori/Salmamoori-vit-fine-tune-CIFAR-10
Salmamoori
2024-06-21T04:07:33Z
9
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-20T22:56:50Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer model-index: - name: Salmamoori-vit-fine-tune-CIFAR-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Salmamoori-vit-fine-tune-CIFAR-10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0429 | 1.0 | 6250 | 0.0990 | | 0.0025 | 2.0 | 12500 | 0.1146 | | 0.0002 | 3.0 | 18750 | 0.0811 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
janetsw/All
janetsw
2024-06-21T04:03:33Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-06-13T02:38:04Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - janetsw/All These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1-base. You can find some example images below.
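The card shows no usage snippet; a hedged sketch with diffusers (not part of the original card) would load the textual inversion embedding on top of the base model. Note that the learned placeholder token is not stated in the card, so the prompt below is only illustrative:

```py
import torch
from diffusers import StableDiffusionPipeline

# Assumed usage sketch for the textual inversion weights above.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# load_textual_inversion reads the learned embedding (and its placeholder token) from the repo;
# the card does not name the token, so check the repo files before writing prompts.
pipe.load_textual_inversion("janetsw/All")

# Replace the wording below with a prompt that uses the repo's placeholder token.
image = pipe("a photo in the style of the learned concept").images[0]
image.save("example.png")
```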
avneshjuet/finetuning-sentiment-model-3000-samples
avneshjuet
2024-06-21T04:02:18Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T03:54:51Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3330 - Accuracy: 0.87 - F1: 0.8713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
damgomz/ft_1_18e6_base_x2
damgomz
2024-06-21T04:01:29Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:32:38Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 132613.20560336113 | | Emissions (Co2eq in kg) | 0.0802462327356631 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.5655684057258914 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1381370938529569 | | Consumed energy (kWh) | 1.7037054995788448 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2552804207864702 | | Emissions (Co2eq in kg) | 0.051940172194649775 | ## Note 19 June 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_18e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.8e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.696021 | 0.793166 | | 1 | 0.441239 | 0.386983 | 0.897424 | | 2 | 0.424628 | 0.432815 | 0.580151 | | 3 | 0.413239 | 0.381885 | 0.876594 | | 4 | 0.344091 | 0.377499 | 0.885580 | | 5 | 0.316042 | 0.378036 | 0.890475 | | 6 | 0.267633 | 0.321538 | 0.907539 |
chainup244/Qwen-Qwen1.5-0.5B-1718942175
chainup244
2024-06-21T03:56:50Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T03:56:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tom158/Llama3_8B-Inst_NutritionAI
Tom158
2024-06-21T03:56:05Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-21T03:42:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ANGKJ1995/distilbert-base-uncased-mbib-2048
ANGKJ1995
2024-06-21T03:41:43Z
7
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T03:41:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ECNU-CILab/ExVideo-SVD-128f-v1
ECNU-CILab
2024-06-21T03:41:35Z
0
58
null
[ "safetensors", "arxiv:2406.14130", "license:apache-2.0", "region:us" ]
null
2024-06-19T13:45:07Z
--- license: apache-2.0 --- # ExVideo ExVideo is a post-tuning technique aimed at enhancing the capability of video generation models. We have extended Stable Video Diffusion to achieve the generation of long videos up to 128 frames. This is the first model we have made public. Due to limitations in computational resources, this model was trained on about 40,000 videos using 8x A100 GPUs for approximately one week. Therefore, the model may sometimes generate content that does not conform to real-world principles. Please look forward to the release of our subsequent models. To use this model, please refer to [DiffSynth](https://github.com/modelscope/DiffSynth-Studio). * [Project Page](https://ecnu-cilab.github.io/ExVideoProjectPage/) * [Source Code](https://github.com/modelscope/DiffSynth-Studio) * [Technical report](https://arxiv.org/abs/2406.14130) ## Examples ||||| |-|-|-|-| |![](gifs/model_title_0.gif)|![](gifs/model_title_1.gif)|![](gifs/model_title_2.gif)|![](gifs/model_title_3.gif)| |![](gifs/model_title_4.gif)|![](gifs/model_title_5.gif)|![](gifs/model_title_6.gif)|![](gifs/model_title_7.gif)| |![](gifs/model_title_8.gif)|![](gifs/model_title_9.gif)|![](gifs/model_title_10.gif)|![](gifs/model_title_11.gif)|
ltaler01/Llama-2-13b-chat-hf
ltaler01
2024-06-21T03:38:59Z
5
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "conversational", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T23:22:43Z
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: >- ### LLAMA 2 COMMUNITY LICENSE AGREEMENT "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 2 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com) extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 license: llama2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. 
||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. 
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)| |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
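The Intended Use section above notes that the chat variants expect a specific prompt layout (`[INST]`, `<<SYS>>`, `BOS`/`EOS` tokens and the surrounding whitespace). The snippet below is only an illustrative, single-turn sketch of that layout, not Meta's reference `chat_completion` implementation; the helper name is invented for this example, and the `<s>`/`</s>` tokens are assumed to be added by the tokenizer rather than written into the string.

```python
# A minimal sketch (not Meta's reference chat_completion) of the single-turn
# Llama-2-Chat prompt layout: an optional <<SYS>> block folded into the first
# user turn, wrapped in [INST] ... [/INST]. BOS/EOS tokens (<s>, </s>) are
# normally added by the tokenizer rather than written into the prompt string.
def build_llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "[INST] <<SYS>>\n"
        f"{system_prompt.strip()}\n"
        "<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

print(build_llama2_chat_prompt(
    "You are a helpful, respectful and honest assistant.",
    "Explain grouped-query attention in one sentence.",
))
```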
richardkelly/Qwen-Qwen1.5-1.8B-1718940448
richardkelly
2024-06-21T03:27:33Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-21T03:27:28Z
--- library_name: peft base_model: Qwen/Qwen1.5-1.8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
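The "How to Get Started" section above is still a placeholder, so the following is only a guess at basic usage, inferred from the front matter (`library_name: peft`, `base_model: Qwen/Qwen1.5-1.8B`) and the PEFT 0.11.1 framework version; the repository itself does not document how the adapter is meant to be used.

```python
# Sketch inferred from the card's front matter only; the adapter's intended
# task and prompt format are not documented, so treat this as illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "Qwen/Qwen1.5-1.8B"
adapter_id = "richardkelly/Qwen-Qwen1.5-1.8B-1718940448"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
# Attach the PEFT adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```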
richardkelly/Qwen-Qwen1.5-0.5B-1718940391
richardkelly
2024-06-21T03:26:40Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
2024-06-21T03:26:32Z
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
DRAGOO/bert_210
DRAGOO
2024-06-21T03:18:24Z
7
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T02:59:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
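No usage example is provided in the card; based only on the repository tags (`distilbert`, `text-classification`), a minimal inference sketch might look like the following. The input sentence is a placeholder and the label set of this classifier is unknown.

```python
# Assumption-laden sketch: the task is taken from the repo tags, not the card.
from transformers import pipeline

classifier = pipeline("text-classification", model="DRAGOO/bert_210")
print(classifier("Replace this with a sentence you want to classify."))
```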
kevin009/llama44
kevin009
2024-06-21T02:59:07Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T02:52:46Z
--- library_name: transformers tags: - unsloth - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
allistair99/distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-optiparam
allistair99
2024-06-21T02:54:05Z
5
0
transformers
[ "transformers", "safetensors", "distilbert", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased-distilled-squad", "base_model:finetune:distilbert/distilbert-base-uncased-distilled-squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-21T02:53:30Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased-distilled-squad tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-optiparam results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-squad-BiLSTM-finetuned-squad-optiparam This model is a fine-tuned version of [distilbert/distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 6 - eval_batch_size: 60 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.6652 | 1.0 | 14619 | 1.0816 | | 0.6453 | 2.0 | 29238 | 1.0866 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
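The card lists the training hyperparameters but not the training script. As a rough illustration, here is one way those values could map onto `transformers.TrainingArguments`; the `output_dir` and anything not listed above are placeholders, and Adam's betas/epsilon match the library defaults.

```python
# Sketch only: mirrors the hyperparameters reported in the card, not the
# actual training script used to produce this checkpoint.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-squad-bilstm-optiparam",  # placeholder name
    learning_rate=3e-5,              # learning_rate: 3e-05
    per_device_train_batch_size=6,   # train_batch_size: 6
    per_device_eval_batch_size=60,   # eval_batch_size: 60
    seed=42,                         # seed: 42
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=2,              # num_epochs: 2
    weight_decay=0.0,                # weight_decay: 0.0 (Adam betas/eps are defaults)
)
```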
siddharth-magesh/Tiny_Lllama-AgriDB
siddharth-magesh
2024-06-21T02:53:22Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T02:44:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kevinkawchak/gradientai-Llama-3-8B-Instruct-Gradient-1048k-16Molecule
kevinkawchak
2024-06-21T02:53:15Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:gradientai/Llama-3-8B-Instruct-Gradient-1048k", "base_model:finetune:gradientai/Llama-3-8B-Instruct-Gradient-1048k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-17T20:12:47Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: gradientai/Llama-3-8B-Instruct-Gradient-1048k --- # Uploaded model - **Developed by:** kevinkawchak - **License:** apache-2.0 - **Finetuned from model :** gradientai/Llama-3-8B-Instruct-Gradient-1048k - **Finetuned dataset:** zjunlp/Mol-Instructions/Molecule-oriented Instructions/description_guided_molecule_design This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
pranesh16/Tiny-Llama-AgriBot
pranesh16
2024-06-21T02:49:56Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T02:40:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JuanMa360/Phi-3-mini-128k-dd360-finetune
JuanMa360
2024-06-21T02:48:48Z
9
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T02:46:46Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
spow12/ChatWaifu_v1.0
spow12
2024-06-21T02:24:34Z
29
7
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "nsfw", "Visual novel", "roleplay", "conversational", "ja", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-13T05:53:55Z
--- library_name: transformers tags: - nsfw - Visual novel - roleplay license: cc-by-nc-4.0 language: - ja pipeline_tag: text-generation --- # Model Card for Model ID ![image](./cover.png) <!-- Generated using cagliostrolab/animagine-xl-3.0 --> <!--Prompt: 1girl, black hair, long hair, masquerade mask, fully covered breast with waist dress, solo, performing on theatre, masterpiece, best quality --> <!--Negative Prompt: nsfw, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name --> Fine tunned LLM from [Local-Novel-LLM-project/Vecteus-v1](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1) This model aimed to act like visual novel character. # WaifuModel Collections - [TTS](https://huggingface.co/spow12/visual_novel_tts) - [Chat](https://huggingface.co/spow12/ChatWaifu_v1.0) - [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor) # Update - 2024.06.20 Upload other chara's sample chat history. - 2024.06.13 Upload Model ## Model Details ### Model Description - **Developed by:** spow12(yw_nam) - **Shared by :** spow12(yw_nam) - **Model type:** CausalLM - **Language(s) (NLP):** japanese - **Finetuned from model :** [Local-Novel-LLM-project/Vecteus-v1](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1) Currently, chatbot has below personality. character | visual_novel | --- | --- | ムラサメ | Senren*Banka | 茉子 | Senren*Banka | 芳乃 | Senren*Banka | レナ | Senren*Banka | 千咲 | Senren*Banka | 芦花 | Senren*Banka | 愛衣 | Café Stella and the Reaper's Butterflies | 栞那 | Café Stella and the Reaper's Butterflies | ナツメ | Café Stella and the Reaper's Butterflies | 希 | Café Stella and the Reaper's Butterflies | 涼音 | Café Stella and the Reaper's Butterflies | あやせ | Riddle Joker | 七海 | Riddle Joker | 羽月 | Riddle Joker | 茉優 | Riddle Joker | 小春 | Riddle Joker | But, you can use this model for other character. Set the Character background and Character's converstaion for LLM Then, LLM will mimic the character's personality. ### Feature - 128k context window (4k context in [before](https://huggingface.co/spow12/ChatWaifu)) - Can be generated NSFW well - Memory ability that does not forget even after long-context generation Note, this features comes from [Local-Novel-LLM-project/Vecteus-v1](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1) ## Uses ```python from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM from huggingface_hub import hf_hub_download import json model_id = 'spow12/ChatWaifu_v1.0' tokenizer = AutoTokenizer.from_pretrained(model_id) streamer = TextStreamer(tokenizer) generation_configs = dict( max_new_tokens=2048, num_return_sequences=1, temperature=0.7, early_stopping=True, repetition_penalty=1.1, num_beams=2, do_sample=True, top_k=20, top_p=0.95, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, # streamer = TextStreamer(tokenizer) # Optional, if you want to use streamer, you have to set num_beams=1 ) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map='auto', trust_remote_code=True ) model.eval() hf_hub_download(repo_id="spow12/ChatWaifu", filename="system_dict.json", local_dir='./') hf_hub_download(repo_id="spow12/ChatWaifu_v1.0", filename="sample_chat_history.json", local_dir='./') chara = "ナツメ" # you can change character here. user_query = '「お疲れ様、ナツメ。」' # Your first query. 
feel free to modify. Note, the user query should be in 「」 for best chat quality. chat_history = [f'ユーザー: {user_query}'] chat = "\n".join(chat_history) situation = """\n\n## Scene Background 今、ナツメはカフェで働いています。 ユーザーも働くために今カフェに来たどころです。 今の時間は14時です。 now\n\nConversation start at here.\n\n""" with open('./system_dict.json', 'r') as f: chara_background_dict = json.load(f) with open('./sample_chat_history.json', 'r') as f: sample_chat_history = json.load(f) message = [ { 'role' : 'system', 'content': chara_background_dict[chara] }, { 'content': "Classic scenes for the role are as follows:\n" + "\n###\n".join(sample_chat_history[chara]) + situation + chat, 'role': 'user' } ] out = pipe(message, **generation_configs) out ``` ```output system: I want you to act like ナツメ from CafeStella. If others‘ questions are related with the novel, please try to reuse the original lines from the novel. I want you to respond and answer like ナツメ using the tone, manner and vocabulary ナツメ would use. You must know all of the knowledge of ナツメ. Here is information of ナツメ 名前:四季 ナツメ(しき なつめ) ユーザーと同じ大学に通う女の子。 クールな女の子だと周りからは思われている。 実際にはクールというわけではないものの、 感情を表に出すのが、あまり得意ではない。 わりと純情であり、性的な話には顔を真っ赤にしたりする。 校内では異性の告白をすべて断ったことから“孤高の撃墜王“と呼ばれている。 クールな性格で感情を表に出すのが苦手。 エロい話では恥ずかしさで赤面することが多い。 序盤の事故で彼女も死亡し、その際に魂の一部が蝶となりこぼれ落ち、時間が巻き戻った現在ではこのままでは彼女はもう一度死ぬことになるとミカドに明かされていた。 喫茶ステラはそんな彼女の両親の夢を現実にしたいと願う彼女の夢で開くことになった喫茶店である。ユーザーと恋人になってからは自身がどんどん性に溺れていくのを恥ずかしがりながらも受け入れ、やがては将来を見据えた家族計画も考えるようになる。 幼少時代は入退院を繰り返すほど体が弱く、両親の夢であったカフェ経営の夢の断念は自身が原因と思っており、生への執着が弱かった。 大学では特定の人間と仲良くすることもなく、 飲みサーの軽い陽キャは嫌い。うざい。面倒臭い。 と、そういった人種とは、距離を取っている。 Hair: Black, Braided Odango, Hime Cut, Tiny Braid, Waist Length+ Eyes: Amber, Tsurime Body: Medium Breasts, Mole, Pale, Slim, Young-adult Personality: Blunt, Classic Tsundere, CompetitiveS, Jealous, Loner, Low Self-esteemS, Reserved, Sharp-tongued, Smart, Stoic, Sweets Lover, Watashi Role: GirlfriendS, Popular, Shopkeeper, University Student, Waitstaff user: Classic scenes for the role are as follows: ユーザー: 「今日も無事に終わったな」 ナツメ: 「そうね。チョコのお菓子も完売したし……よかったぁ~」 ナツメ: 「でも、なんでチョコレートそのものじゃなくて、チョコを使ったお菓子だったんだろ?」 ユーザー: 「カカオの種類とかテンパリングの温度管理とか……なんか色々難しいんだってさ」 ナツメ: 「ふーん」 ユーザー: 「………」 ユーザー: 「それで、あのさ……」 ナツメ: 「お望みのカレーはちゃんと準備してるから。部屋、行こうか」 ### 大家: 「……ふぅ……」 大家: 「このケーキ、美味しいわね。どこかのお店?」 ナツメ: 「いえ。このお店の中で作っています」 大家: 「ウェイトレスにも新しい人が増えて、すぐにでもお店をオープンさせられそうなぐらい」 ナツメ: 「はい。そのつもりで準備をしましたから」 大家: 「そう」 ナツメ: 「ずっと、お父さんやお母さんの真似をすることだけを考えてました」 ナツメ: 「でもそれではダメだと教わって、考えました。どうすれば、お客さんに来てもらえるかって」 ナツメ: 「ワタシ一人だけじゃなく、周りのみんなに助けてもらいながら考えて……こういうお店にしました」 ### 大家: 「このケーキ、美味しいわね。どこかのお店?」 ナツメ: 「いえ。このお店の中で作っています」 大家: 「ウェイトレスにも新しい人が増えて、すぐにでもお店をオープンさせられそうなぐらい」 ナツメ: 「はい。そのつもりで準備をしましたから」 大家: 「そう」 ナツメ: 「ずっと、お父さんやお母さんの真似をすることだけを考えてました」 ## Scene Background 今、ナツメはカフェで働いています。 ユーザーも働くために今カフェに来たどころです。 今の時間は14時です。 now Conversation start at here. ユーザー: 「お疲れ様、ナツメ。」 assistant: ナツメ: 「うん。お疲れ様 ``` To continue the conversation, ```python message.append({ 'role': 'user', 'content': """ユーザー:「うん、今日のカフェはどう?やはり忙しい?」""" }) out = pipe(message, **generation_configs) out ``` ```output system: I want you to act like ナツメ from CafeStella. .... .... .... ## Scene Background Conversation start at here. ユーザー: 「お疲れ様、ナツメ。」 assistant: ナツメ: 「うん。お疲れ様 user: ユーザー:「うん、今日の仕事はどう?」 assistant: ナツメ: やっと少し板についてきたかな? って感じ。まだまだ緊張はしっぱなしだけど ``` ## Demo You can use Demo in google colab. 
Check [Here](https://colab.research.google.com/drive/19uLy7szBODnsmky_0-uEa2ZKLP8w0TlJ?usp=sharing) ## Bias, Risks, and Limitations This model was trained on a Japanese dataset that includes visual novels containing NSFW content (some remains even after filtering), so the model may generate NSFW content. ## Use & Credit This model is currently available for non-commercial and research purposes only. Since the licensing details are not fully settled, please use it responsibly. By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and anime fans). This repository can use visual-novel-based RAG, but I will not distribute that data yet because I am not sure whether it is permissible to release it publicly. ## Citation ```bibtex @misc {ChatWaifu_v1.0, author = { YoungWoo Nam }, title = { ChatWaifu_v1.0 }, year = 2024, url = { https://huggingface.co/spow12/ChatWaifu_v1.0 }, publisher = { Hugging Face } } ``` ## Special Thanks This project's prompt was largely inspired by [chatHaruhi](https://github.com/LC1332/Chat-Haruhi-Suzumiya)

damgomz/ft_1_11e6_base_x2
damgomz
2024-06-21T02:11:12Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:29:34Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 126011.25632238388 | | Emissions (Co2eq in kg) | 0.0762513115377155 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.4876291711131708 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1312602694022162 | | Consumed energy (kWh) | 1.6188894405153942 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.24257166842058897 | | Emissions (Co2eq in kg) | 0.049354408726267014 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_11e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.1e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.760606 | 0.489090 | | 1 | 0.314536 | 0.260713 | 0.886622 | | 2 | 0.224444 | 0.242225 | 0.920389 | | 3 | 0.170577 | 0.250451 | 0.926754 | | 4 | 0.115247 | 0.268970 | 0.924482 | | 5 | 0.069299 | 0.310545 | 0.912275 | | 6 | 0.037475 | 0.342748 | 0.924436 |
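The card above documents training and environmental impact but no inference code. As a minimal usage sketch (not part of the original card), the fine-tuned ALBERT classifier can be loaded with the standard transformers text-classification pipeline; the label names and their meaning are not documented, so the printed labels are assumptions.

```python
# Hedged usage sketch for damgomz/ft_1_11e6_base_x2.
# The label semantics (e.g. LABEL_0 / LABEL_1) are assumptions: the card
# does not document what the classes represent.
from transformers import pipeline

classifier = pipeline("text-classification", model="damgomz/ft_1_11e6_base_x2")

text = (
    "GEPS Techno is the pioneer of hybridization of renewable energies at sea. "
    "We imagine, design and commercialize innovative off-grid systems that aim "
    "to generate power at sea, stabilize and collect data."
)
print(classifier(text))  # e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```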
damgomz/ft_1_17e6_base_x4
damgomz
2024-06-21T02:09:23Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:32:14Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 125886.98995184898 | | Emissions (Co2eq in kg) | 0.0761761304988625 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.4861623643298911 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1311309096741174 | | Consumed energy (kWh) | 1.6172932740040036 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2423324556573093 | | Emissions (Co2eq in kg) | 0.04930573773114085 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_17e6_base_x4 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.7e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.723216 | 0.485048 | | 1 | 0.495929 | 0.597257 | 0.618294 | | 2 | 0.472406 | 0.500370 | 0.723210 | | 3 | 0.426699 | 0.412985 | 0.711219 | | 4 | 0.375053 | 0.392920 | 0.892864 | | 5 | 0.284502 | 0.293042 | 0.918699 | | 6 | 0.212256 | 0.278543 | 0.918602 |
damgomz/ft_2_16e6_base_x12
damgomz
2024-06-21T02:08:13Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:30:52Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 125822.48677945136 | | Emissions (Co2eq in kg) | 0.07613707246273 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.4854004569676238 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1310635769749674 | | Consumed energy (kWh) | 1.6164640339425915 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.24220828705044384 | | Emissions (Co2eq in kg) | 0.04928047398861844 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_16e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.6e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.719599 | 0.552431 | | 1 | 0.339442 | 0.266141 | 0.915358 | | 2 | 0.243426 | 0.252600 | 0.927469 | | 3 | 0.204434 | 0.255368 | 0.896083 | | 4 | 0.207367 | 0.284679 | 0.901333 | | 5 | 0.180203 | 0.273225 | 0.917218 | | 6 | 0.146234 | 0.255182 | 0.910723 |
YiDuo1999/Llama-3-Physician-8B-Base
YiDuo1999
2024-06-21T01:59:20Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T12:31:52Z
--- license: llama3 metrics: - accuracy --- The official base model weights for "Efficient Continual Pre-training by Mitigating the Stability Gap". The model has been continually pretrained on a high-quality medical sub-corpus from the RefinedWeb dataset.
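The card does not include inference code. A minimal loading sketch, assuming the weights are stored in the standard transformers format (as the repository tags suggest):

```python
# Minimal sketch for loading YiDuo1999/Llama-3-Physician-8B-Base.
# Assumes standard Llama-3 weights in transformers format and a GPU with
# enough memory for an 8B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YiDuo1999/Llama-3-Physician-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Continual pre-training on medical text helps because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```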
kevin009/llama41
kevin009
2024-06-21T01:55:08Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T01:49:08Z
--- base_model: unsloth/llama-3-8b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** kevin009 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
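The card lists training details but no inference code. A hedged chat-style sketch follows; it assumes the repository ships full (merged) weights and the Llama-3 instruct chat template, which the card does not state explicitly.

```python
# Hedged sketch for kevin009/llama41: chat-style inference.
# Assumes merged weights and a Llama-3 instruct chat template in the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevin009/llama41"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what supervised fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```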
zhaorui-nb/Meta-Llama-3-8B-Instruct._.lora_ft._.Setting1
zhaorui-nb
2024-06-21T01:54:11Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T01:29:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shirleyah/Q30_continent
shirleyah
2024-06-21T01:53:00Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-06-21T01:33:34Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct model-index: - name: Q30_continent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Q30_continent This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.11.2.dev0 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
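The card records the training setup but no usage code. A hedged sketch for attaching the LoRA adapter to its base model with PEFT; access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights is assumed.

```python
# Hedged sketch: load the Q30_continent LoRA adapter on top of its base model.
# Assumes you have access to the gated meta-llama/Meta-Llama-3-8B-Instruct repo.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "shirleyah/Q30_continent"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
```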
cminja/whisper-tiny-sr-commonvoice-2000
cminja
2024-06-21T01:52:27Z
9
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-21T01:06:02Z
--- library_name: transformers --- # finetuned whisper-tiny model on custom dataset This model is a fine-tuned version of `openai/whisper-tiny` on Serbian Mozilla/Common Voice 13. It achieves the following results on the evaluation set: - **Loss**: 0.1628 - **Wer Ortho**: 0.1635 - **Wer**: 0.0556 ## Training Procedure ### Training Hyperparameters The following hyperparameters were used during training: - **learning_rate**: 3e-5 - **train_batch_size**: 32 - **eval_batch_size**: 32 - **gradient_accumulation_steps**: 2 - **total_train_batch_size**: 64 - **optimizer**: Adam with betas=(0.9,0.999) and epsilon=1e-08 - **lr_scheduler_type**: linear - **lr_scheduler_warmup_steps**: 100 - **training_steps**: 2000 - **mixed_precision_training**: Native AMP ### Training Results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |---------------|-------|------|-----------------|-----------|-------| | 0.0600 | 1.34 | 500 | 0.1852 | 0.1800 | 0.0745| | 0.0285 | 2.67 | 1000 | 0.1715 | 0.1710 | 0.0640| | 0.0140 | 4.01 | 1500 | 0.1658 | 0.1685 | 0.0582| ## Framework Versions - **Transformers**: 4.41.2 - **Pytorch**: 2.3.0+cu121 - **Datasets**: 2.18.0 - **Tokenizers**: 0.19.1
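The card reports evaluation results but no usage code. A straightforward sketch (not from the original card) for Serbian transcription with the transformers ASR pipeline; the audio file path below is a placeholder.

```python
# Hedged sketch: Serbian speech recognition with the fine-tuned Whisper model.
# "sample_sr.wav" is a placeholder path, not a file shipped with the model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="cminja/whisper-tiny-sr-commonvoice-2000",
)
print(asr("sample_sr.wav")["text"])
```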
NourFakih/Vit-GPT2-COCO2017Flickr-85k-09
NourFakih
2024-06-21T01:42:18Z
128
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "base_model:NourFakih/Vit-GPT2-COCO2017Flickr-85k-09", "base_model:finetune:NourFakih/Vit-GPT2-COCO2017Flickr-85k-09", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-06-19T18:33:04Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: NourFakih/Vit-GPT2-COCO2017Flickr-85k-09 metrics: - rouge model-index: - name: Vit-GPT2-COCO2017Flickr-85k-09 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Vit-GPT2-COCO2017Flickr-85k-09 This model is a fine-tuned version of [NourFakih/Vit-GPT2-COCO2017Flickr-85k-09](https://huggingface.co/NourFakih/Vit-GPT2-COCO2017Flickr-85k-09) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6343 - Rouge1: 38.8156 - Rouge2: 13.6737 - Rougel: 34.9479 - Rougelsum: 34.9604 - Gen Len: 12.1285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Gen Len | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:------:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.2429 | 0.0933 | 500 | 11.738 | 0.5351 | 39.4446 | 14.1599 | 35.6123 | 35.5846 | | 0.2537 | 0.1866 | 1000 | 12.3488 | 0.5301 | 39.5332 | 14.4745 | 35.644 | 35.6159 | | 0.2564 | 0.2799 | 1500 | 12.2455 | 0.5198 | 39.8297 | 14.555 | 35.8598 | 35.8344 | | 0.2585 | 0.3732 | 2000 | 11.8575 | 0.5207 | 39.4558 | 14.0496 | 35.5597 | 35.526 | | 0.2579 | 0.4665 | 2500 | 11.9733 | 0.5188 | 39.1359 | 14.125 | 35.4068 | 35.3709 | | 0.2588 | 0.5599 | 3000 | 12.278 | 0.5196 | 39.0831 | 14.0658 | 35.4608 | 35.4283 | | 0.2618 | 0.6532 | 3500 | 11.9942 | 0.5194 | 39.751 | 14.443 | 36.076 | 36.0475 | | 0.2579 | 0.7465 | 4000 | 12.0512 | 0.5102 | 39.7601 | 14.5095 | 36.0252 | 35.9857 | | 0.2569 | 0.8398 | 4500 | 11.6483 | 0.5199 | 39.398 | 13.8871 | 35.7218 | 35.6911 | | 0.253 | 0.9331 | 5000 | 12.0198 | 0.5200 | 39.8951 | 14.4146 | 35.883 | 35.8507 | | 0.2361 | 1.0264 | 5500 | 12.183 | 0.5605 | 39.3352 | 14.2234 | 35.3107 | 35.2772 | | 0.2 | 1.1197 | 6000 | 11.8598 | 0.5702 | 39.2184 | 14.0096 | 35.5475 | 35.5042 | | 0.2034 | 1.2130 | 6500 | 11.878 | 0.5543 | 39.7118 | 14.2757 | 35.7613 | 35.7316 | | 0.1968 | 1.3063 | 7000 | 12.1725 | 0.5584 | 39.1847 | 13.9003 | 35.3962 | 35.3713 | | 0.1986 | 1.3996 | 7500 | 11.8395 | 0.5572 | 39.4428 | 14.2672 | 35.7359 | 35.7093 | | 0.1988 | 1.4930 | 8000 | 11.9932 | 0.5552 | 39.2719 | 14.0411 | 35.482 | 35.4833 | | 0.1971 | 1.5864 | 8500 | 12.1003 | 0.5572 | 39.2681 | 14.1036 | 35.4466 | 35.4245 | | 0.1978 | 1.6797 | 9000 | 12.1152 | 0.5667 | 39.2673 | 14.0918 | 35.4179 | 35.4169 | | 0.1937 | 1.7730 | 9500 | 12.2208 | 0.5781 | 39.4115 | 14.1115 | 35.6952 | 35.6834 | | 0.1897 | 1.8663 | 10000 | 11.8818 | 0.5754 | 39.2059 | 14.076 | 35.3392 | 35.3332 | | 0.1898 | 1.9596 | 10500 | 11.8818 | 0.5720 | 39.4033 | 14.1447 | 35.598 | 35.5976 | | 0.1685 | 2.0529 | 11000 | 12.0585 | 0.6186 | 38.4626 | 13.4695 | 34.7378 | 34.7294 | | 0.1454 | 2.1462 | 11500 | 11.9448 | 0.6147 | 38.5335 | 13.5152 | 34.7075 | 34.7033 | | 0.1434 | 2.2395 | 12000 | 12.1855 | 0.6229 | 39.0044 | 13.9276 | 35.2226 | 35.2116 | | 0.1479 | 2.3328 | 
12500 | 12.0273 | 0.6262 | 38.6281 | 13.5737 | 34.8247 | 34.8245 | | 0.1452 | 2.4261 | 13000 | 12.0222 | 0.6243 | 38.9136 | 13.6727 | 35.0597 | 35.0643 | | 0.1464 | 2.5195 | 13500 | 12.006 | 0.6309 | 38.9915 | 13.5041 | 34.9971 | 34.9991 | | 0.1431 | 2.6128 | 14000 | 12.0602 | 0.6318 | 38.7595 | 13.5585 | 34.8308 | 34.834 | | 0.1431 | 2.7061 | 14500 | 12.229 | 0.6277 | 38.8899 | 13.7343 | 34.9536 | 34.9513 | | 0.1445 | 2.7995 | 15000 | 12.0343 | 0.6357 | 38.7681 | 13.5849 | 34.9764 | 34.9564 | | 0.1379 | 2.8928 | 15500 | 0.6340 | 38.9196 | 13.6285 | 34.9761 | 34.9855 | 12.1242 | | 0.1411 | 2.9861 | 16000 | 0.6343 | 38.8156 | 13.6737 | 34.9479 | 34.9604 | 12.1285 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
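The card above only reports training metrics. As a hedged captioning sketch (not part of the original card), the model can be driven through the image-to-text pipeline; the image path below is a placeholder.

```python
# Hedged sketch: image captioning with the fine-tuned ViT-GPT2 model.
# "example.jpg" is a placeholder image path.
from transformers import pipeline

captioner = pipeline("image-to-text", model="NourFakih/Vit-GPT2-COCO2017Flickr-85k-09")
print(captioner("example.jpg"))  # e.g. [{'generated_text': 'a man riding a bike ...'}]
```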
damgomz/ft_2_12e6_base_x12
damgomz
2024-06-21T01:05:14Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:31:37Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 122052.76825380324 | | Emissions (Co2eq in kg) | 0.0738559748976662 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.4408972258443649 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.12713689015458 | | Consumed energy (kWh) | 1.5680341159989435 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2349515788885712 | | Emissions (Co2eq in kg) | 0.04780400089940627 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_12e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.2e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.750386 | 0.391343 | | 1 | 0.330289 | 0.274714 | 0.877760 | | 2 | 0.239347 | 0.254215 | 0.882269 | | 3 | 0.189316 | 0.250256 | 0.916003 | | 4 | 0.159790 | 0.249364 | 0.908875 | | 5 | 0.127783 | 0.270616 | 0.921408 | | 6 | 0.107077 | 0.307012 | 0.910310 |
damgomz/ft_1_11e6_base_x1
damgomz
2024-06-21T01:01:04Z
7
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:27:04Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 121804.24095439912 | | Emissions (Co2eq in kg) | 0.0737055814425409 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.4379631821971823 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1268779346058764 | | Consumed energy (kWh) | 1.5648411168030611 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.23447316383721828 | | Emissions (Co2eq in kg) | 0.047706661040472984 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_11e6_base_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.1e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.723112 | 0.722272 | | 1 | 0.385379 | 0.286492 | 0.859109 | | 2 | 0.272018 | 0.242845 | 0.917300 | | 3 | 0.208258 | 0.236529 | 0.907759 | | 4 | 0.159159 | 0.287554 | 0.899892 | | 5 | 0.109970 | 0.260903 | 0.927258 | | 6 | 0.067457 | 0.286845 | 0.927892 |
naga-jay/Mistral-7B-FineTune-v0.1
naga-jay
2024-06-21T00:55:45Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-01-19T23:33:50Z
--- base_model: mistralai/Mistral-7B-v0.1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
mou3az/QGfinal
mou3az
2024-06-21T00:55:30Z
3
0
peft
[ "peft", "safetensors", "base_model:bigscience/bloom-560m", "base_model:adapter:bigscience/bloom-560m", "region:us" ]
null
2024-05-13T22:58:55Z
--- library_name: peft base_model: bigscience/bloom-560m ---
ZeroWw/Mistroll-7B-v2.2-GGUF
ZeroWw
2024-06-21T00:54:51Z
8
0
null
[ "gguf", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-21T00:42:21Z
--- license: mit language: - en --- My own (ZeroWw) quantizations. Output and embedding tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k. Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16 model.
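The card gives no usage instructions. A hedged sketch for downloading one of the GGUF files and running it with llama-cpp-python; the exact filename inside the repository is an assumption, so check the repo's file list before using it.

```python
# Hedged sketch: fetch a GGUF file from the repo and run it with llama-cpp-python.
# The filename below is an assumption; replace it with a real file name from the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="ZeroWw/Mistroll-7B-v2.2-GGUF",
    filename="Mistroll-7B-v2.2.f16.q6.gguf",  # assumed name
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Q: What is quantization? A:", max_tokens=64)["choices"][0]["text"])
```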
mou3az/QuestionGeneration
mou3az
2024-06-21T00:54:38Z
3
0
peft
[ "peft", "safetensors", "General purpose", "Text2text Generation", "en", "dataset:squad_v2", "dataset:drop", "base_model:facebook/bart-base", "base_model:adapter:facebook/bart-base", "license:apache-2.0", "region:us" ]
null
2024-03-06T03:28:24Z
--- license: apache-2.0 base_model: facebook/bart-base datasets: - squad_v2 - drop language: - en library_name: peft tags: - General purpose - Text2text Generation metrics: - bertscore - accuracy - rouge --- # Model Card Base Model: facebook/bart-base Fine-tuned: using PEFT-LoRA Datasets: squad_v2, drop Task: Generating questions from context and answers Language: English # Loading the model ```python from peft import PeftModel, PeftConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer HUGGING_FACE_USER_NAME = "mou3az" model_name = "Question-Generation" peft_model_id = f"{HUGGING_FACE_USER_NAME}/{model_name}" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=False, device_map='auto') QG_tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) QG_model = PeftModel.from_pretrained(model, peft_model_id) ``` # At inference time ```python def get_question(context, answer): device = next(QG_model.parameters()).device input_text = f"Given the context '{context}' and the answer '{answer}', what question can be asked?" encoding = QG_tokenizer.encode_plus(input_text, padding=True, return_tensors="pt").to(device) output_tokens = QG_model.generate(**encoding, early_stopping=True, num_beams=5, num_return_sequences=1, no_repeat_ngram_size=2, max_length=100) out = QG_tokenizer.decode(output_tokens[0], skip_special_tokens=True).replace("question:", "").strip() return out ``` # Training parameters and hyperparameters The following were used during training: For LoRA: r=18 alpha=8 For training arguments: gradient_accumulation_steps=16 per_device_train_batch_size=8 per_device_eval_batch_size=8 max_steps=3000 warmup_steps=75 weight_decay=0.05 learning_rate=1e-3 lr_scheduler_type="linear" # Performance Metrics on Evaluation Set: for 3000 optimization steps: Training Loss: 1.292400 Evaluation Loss: 1.244928 Bertscore: 0.8123 Rouge: 0.532144 FuzzyWuzzy similarity: 0.74209
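As a small follow-up usage sketch (not part of the original card), the `get_question` helper defined above can be called as shown below; the context/answer pair is made up for illustration.

```python
# Illustrative call of the get_question helper defined in the card above;
# the context/answer pair is made up for demonstration purposes.
context = ("The Transformer architecture was introduced in 2017 and relies "
           "entirely on attention mechanisms, dispensing with recurrence.")
answer = "2017"
print(get_question(context, answer))
```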
damgomz/ft_2_11e6_base_x12
damgomz
2024-06-21T00:52:51Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:31:24Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 121309.30627012251 | | Emissions (Co2eq in kg) | 0.0734060804039053 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.432120030726992 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1263623879882198 | | Consumed energy (kWh) | 1.558482418715215 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.23352041456998582 | | Emissions (Co2eq in kg) | 0.047512811622464644 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_11e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.1e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.711713 | 0.205082 | | 1 | 0.317601 | 0.276096 | 0.926570 | | 2 | 0.225594 | 0.245490 | 0.871640 | | 3 | 0.183780 | 0.233296 | 0.918802 | | 4 | 0.147627 | 0.262620 | 0.926454 | | 5 | 0.112136 | 0.284340 | 0.919241 | | 6 | 0.084968 | 0.326890 | 0.903500 |
mou3az/IT-General_Question-Generation
mou3az
2024-06-21T00:52:44Z
13
1
peft
[ "peft", "safetensors", "IT purpose", "General purpose", "Text2text Generation", "en", "dataset:squad_v2", "dataset:drop", "dataset:mou3az/IT_QA-QG", "base_model:facebook/bart-base", "base_model:adapter:facebook/bart-base", "license:apache-2.0", "region:us" ]
null
2024-03-02T05:26:50Z
--- license: apache-2.0 base_model: facebook/bart-base datasets: - squad_v2 - drop - mou3az/IT_QA-QG language: - en library_name: peft tags: - IT purpose - General purpose - Text2text Generation metrics: - bertscore - accuracy - rouge --- # Model Card Base Model: facebook/bart-base Fine-tuned: using PEFT-LoRA Datasets: squad_v2, drop, mou3az/IT_QA-QG Task: Generating questions from context and answers Language: English # Loading the model ```python from peft import PeftModel, PeftConfig from transformers import AutoModelForSeq2SeqLM, AutoTokenizer HUGGING_FACE_USER_NAME = "mou3az" model_name = "IT-General_Question-Generation" peft_model_id = f"{HUGGING_FACE_USER_NAME}/{model_name}" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=False, device_map='auto') QG_tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path) QG_model = PeftModel.from_pretrained(model, peft_model_id) ``` # At inference time ```python def get_question(context, answer): device = next(QG_model.parameters()).device input_text = f"Given the context '{context}' and the answer '{answer}', what question can be asked?" encoding = QG_tokenizer.encode_plus(input_text, padding=True, return_tensors="pt").to(device) output_tokens = QG_model.generate(**encoding, early_stopping=True, num_beams=5, num_return_sequences=1, no_repeat_ngram_size=2, max_length=100) out = QG_tokenizer.decode(output_tokens[0], skip_special_tokens=True).replace("question:", "").strip() return out ``` # Training parameters and hyperparameters The following were used during training: For LoRA: r=18 alpha=8 For training arguments: gradient_accumulation_steps=24 per_device_train_batch_size=8 per_device_eval_batch_size=8 max_steps=1000 warmup_steps=50 weight_decay=0.05 learning_rate=3e-3 lr_scheduler_type="linear" # Training Results | Epoch | Optimization Step | Training Loss | Validation Loss | |-------|-------------------|---------------|-----------------| | 0.0 | 84 | 4.6426 | 4.704238 | | 3.0 | 252 | 1.5094 | 1.202135 | | 6.0 | 504 | 1.2677 | 1.146177 | | 9.0 | 756 | 1.2613 | 1.112074 | | 12.0 | 1000 | 1.1958 | 1.109059 | # Performance Metrics on Evaluation Set: Training Loss: 1.1958 Evaluation Loss: 1.109059 Bertscore: 0.8123 Rouge: 0.532144 FuzzyWuzzy similarity: 0.74209
damgomz/ft_1_19e6_base_x2
damgomz
2024-06-21T00:50:11Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:28:55Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 121118.19293737411 | | Emissions (Co2eq in kg) | 0.073290441151324 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.429863936438496 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1261633485729495 | | Consumed energy (kWh) | 1.556027285011445 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.23315252140444515 | | Emissions (Co2eq in kg) | 0.04743795890047152 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_19e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.9e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.710928 | 0.383577 | | 1 | 0.483489 | 0.491339 | 0.720891 | | 2 | 0.424703 | 0.341101 | 0.905298 | | 3 | 0.374512 | 0.319669 | 0.885216 | | 4 | 0.270735 | 0.294139 | 0.920453 | | 5 | 0.239709 | 0.263874 | 0.922102 | | 6 | 0.203588 | 0.266153 | 0.920493 |
mou3az/IT-General-Data-Summarization
mou3az
2024-06-21T00:48:50Z
3
0
peft
[ "peft", "safetensors", "base_model:facebook/bart-base", "base_model:adapter:facebook/bart-base", "region:us" ]
null
2024-03-04T22:35:50Z
--- library_name: peft base_model: facebook/bart-base ---
RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf
RichardErkhov
2024-06-21T00:45:06Z
4
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-20T22:04:38Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mistral-7B-alpaca-case-0-2 - GGUF - Model creator: https://huggingface.co/jisukim8873/ - Original model: https://huggingface.co/jisukim8873/mistral-7B-alpaca-case-0-2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mistral-7B-alpaca-case-0-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q2_K.gguf) | Q2_K | 2.53GB | | [mistral-7B-alpaca-case-0-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [mistral-7B-alpaca-case-0-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.IQ3_S.gguf) | IQ3_S | 2.96GB | | [mistral-7B-alpaca-case-0-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [mistral-7B-alpaca-case-0-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.IQ3_M.gguf) | IQ3_M | 3.06GB | | [mistral-7B-alpaca-case-0-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q3_K.gguf) | Q3_K | 3.28GB | | [mistral-7B-alpaca-case-0-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [mistral-7B-alpaca-case-0-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [mistral-7B-alpaca-case-0-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [mistral-7B-alpaca-case-0-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q4_0.gguf) | Q4_0 | 3.83GB | | [mistral-7B-alpaca-case-0-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [mistral-7B-alpaca-case-0-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [mistral-7B-alpaca-case-0-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q4_K.gguf) | Q4_K | 4.07GB | | [mistral-7B-alpaca-case-0-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [mistral-7B-alpaca-case-0-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q4_1.gguf) | Q4_1 | 4.24GB | | [mistral-7B-alpaca-case-0-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q5_0.gguf) | Q5_0 | 4.65GB | | 
[mistral-7B-alpaca-case-0-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [mistral-7B-alpaca-case-0-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q5_K.gguf) | Q5_K | 4.78GB | | [mistral-7B-alpaca-case-0-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [mistral-7B-alpaca-case-0-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q5_1.gguf) | Q5_1 | 5.07GB | | [mistral-7B-alpaca-case-0-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q6_K.gguf) | Q6_K | 5.53GB | | [mistral-7B-alpaca-case-0-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/jisukim8873_-_mistral-7B-alpaca-case-0-2-gguf/blob/main/mistral-7B-alpaca-case-0-2.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
damgomz/ft_2_16e6_base_x8
damgomz
2024-06-21T00:39:14Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:25:24Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 120483.30019164084 | | Emissions (Co2eq in kg) | 0.0729062641326168 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.4223687856871274 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1255020478963853 | | Consumed energy (kWh) | 1.547870833583508 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.23193035286890862 | | Emissions (Co2eq in kg) | 0.04718929257505933 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_16e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.6e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.703668 | 0.660580 | | 1 | 0.332521 | 0.289320 | 0.900147 | | 2 | 0.249794 | 0.292298 | 0.892337 | | 3 | 0.203943 | 0.244999 | 0.906569 | | 4 | 0.157467 | 0.270938 | 0.901560 | | 5 | 0.117150 | 0.286980 | 0.913198 | | 6 | 0.093197 | 0.329701 | 0.905443 |
MrezaPRZ/codestral_gretel_dataset
MrezaPRZ
2024-06-21T00:27:35Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T00:10:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vdaita/diff-starcoder-7b-rl
vdaita
2024-06-21T00:22:44Z
5
0
transformers
[ "transformers", "pytorch", "safetensors", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-06-15T23:56:18Z
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="vdaita/finetuned_starcoder2_rlstep_500") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("vdaita/finetuned_starcoder2_rlstep_500") model = AutoModelForCausalLMWithValueHead.from_pretrained("vdaita/finetuned_starcoder2_rlstep_500") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
shirleyah/Q27_NZL_IRQ
shirleyah
2024-06-21T00:15:53Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-06-21T00:02:39Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct model-index: - name: Q27_NZL_IRQ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Q27_NZL_IRQ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.11.2.dev0 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
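No usage snippet is included in this card yet; below is a minimal sketch of loading the adapter with 🤗 PEFT, assuming it applies directly on top of the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint. The prompt shown is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; requires accepted access on the Hub
adapter_id = "shirleyah/Q27_NZL_IRQ"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the PEFT adapter weights from this repository on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Hello, how are you?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```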
Nhown2002/Phobert-v2-thesis
Nhown2002
2024-06-21T00:13:48Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "autotrain", "base_model:vinai/phobert-base-v2", "base_model:finetune:vinai/phobert-base-v2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-20T15:49:39Z
--- tags: - autotrain - text-classification base_model: vinai/phobert-base-v2 widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.6724494099617004 f1_macro: 0.7569331733597563 f1_micro: 0.772892295280355 f1_weighted: 0.7744819967617077 precision_macro: 0.7857208805541623 precision_micro: 0.772892295280355 precision_weighted: 0.7858918730269342 recall_macro: 0.7453180945213479 recall_micro: 0.772892295280355 recall_weighted: 0.772892295280355 accuracy: 0.772892295280355
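Since the card only reports validation metrics, here is a minimal inference sketch, assuming the standard 🤗 Transformers text-classification pipeline works for this checkpoint; the input is simply the widget text from the card.

```python
from transformers import pipeline

# Load the AutoTrain fine-tuned PhoBERT classifier from the Hub
classifier = pipeline("text-classification", model="Nhown2002/Phobert-v2-thesis")

# Widget text from this card; for real use, pass (word-segmented) Vietnamese input
print(classifier("I love AutoTrain"))
```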
damgomz/ft_1_17e6_base_x1
damgomz
2024-06-21T00:13:31Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:24:45Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 118935.30783200264 | | Emissions (Co2eq in kg) | 0.0719695536378815 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.4040940282492131 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1238895312905309 | | Consumed energy (kWh) | 1.527983559539745 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.22895046757660506 | | Emissions (Co2eq in kg) | 0.04658299556753436 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_17e6_base_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.7e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.744084 | 0.420257 | | 1 | 0.538038 | 0.551927 | 0.663448 | | 2 | 0.380818 | 0.373791 | 0.866965 | | 3 | 0.296299 | 0.282822 | 0.905710 | | 4 | 0.217475 | 0.264127 | 0.905786 | | 5 | 0.159989 | 0.261415 | 0.926473 | | 6 | 0.122649 | 0.283861 | 0.918131 |
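A minimal sketch of running the classifier directly with 🤗 Transformers, assuming the checkpoint loads as a standard sequence-classification model; the input is a shortened version of the widget text above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "damgomz/ft_1_17e6_base_x1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Shortened widget text from this card; sequence_length in the config above is 400
text = "GEPS Techno is the pioneer of hybridization of renewable energies at sea."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=400)

with torch.no_grad():
    logits = model(**inputs).logits

print(torch.softmax(logits, dim=-1))
```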
vemolka/detr-resnet-50_dogs
vemolka
2024-06-21T00:01:55Z
36
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-06-20T22:31:00Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer model-index: - name: detr-resnet-50_dogs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_dogs This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu118 - Datasets 2.20.0 - Tokenizers 0.19.1
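The card omits a usage example; here is a minimal detection sketch assuming the standard DETR image-processor API, with an illustrative local image path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

repo_id = "vemolka/detr-resnet-50_dogs"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForObjectDetection.from_pretrained(repo_id)

image = Image.open("dog.jpg")  # illustrative local image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw predictions into boxes/labels/scores scaled to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```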
jingwang/foo
jingwang
2024-06-21T00:00:27Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "en", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-20T02:59:26Z
--- library_name: transformers tags: - unsloth license: apache-2.0 language: - en base_model: unsloth/mistral-7b-v0.3-bnb-4bit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
damgomz/ft_2_10e6_base_x8
damgomz
2024-06-20T23:59:46Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:25:23Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 118124.86496782304 | | Emissions (Co2eq in kg) | 0.0714791297984622 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3945261133117848 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1230452715518572 | | Consumed energy (kWh) | 1.5175713848636396 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.22739036506305937 | | Emissions (Co2eq in kg) | 0.04626557211239736 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_10e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.730420 | 0.669431 | | 1 | 0.305427 | 0.250062 | 0.875112 | | 2 | 0.217041 | 0.237435 | 0.904351 | | 3 | 0.168038 | 0.263851 | 0.921478 | | 4 | 0.121297 | 0.278152 | 0.914628 | | 5 | 0.087004 | 0.311839 | 0.890375 | | 6 | 0.056654 | 0.332988 | 0.918942 |
damgomz/ft_2_17e6_base_x12
damgomz
2024-06-20T23:56:21Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:29:35Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 117904.77776050568 | | Emissions (Co2eq in kg) | 0.0713459655060222 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3919280092385131 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1228161682953439 | | Consumed energy (kWh) | 1.5147441775338577 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2269666971889734 | | Emissions (Co2eq in kg) | 0.04617937128953138 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_17e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.7e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.702840 | 0.491526 | | 1 | 0.366557 | 0.328412 | 0.876967 | | 2 | 0.252843 | 0.255156 | 0.912997 | | 3 | 0.237868 | 0.258675 | 0.917252 | | 4 | 0.228164 | 0.292314 | 0.890520 | | 5 | 0.222254 | 0.284453 | 0.905444 | | 6 | 0.192560 | 0.306933 | 0.846793 |
ZeroWw/Samantha-Qwen-2-7B-GGUF
ZeroWw
2024-06-20T23:53:26Z
26
0
null
[ "gguf", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-20T23:40:05Z
--- license: mit language: - en --- My own (ZeroWw) quantizations. The output and embedding tensors are quantized to f16, while all other tensors are quantized to q5_k or q6_k. Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization and perform as well as the pure f16 model.
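A minimal sketch of loading one of these GGUF files with llama-cpp-python; the filename below is an assumption, so check the repository's file list first.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# The filename below is a guess - list the repo files and pick the actual GGUF you want
gguf_path = hf_hub_download(
    repo_id="ZeroWw/Samantha-Qwen-2-7B-GGUF",
    filename="Samantha-Qwen-2-7B.f16.q5.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```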
MostafaAhmed98/AraBert-Arabic-NER-CoNLLpp
MostafaAhmed98
2024-06-20T23:48:51Z
9
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "AraBERT", "ner", "nlp", "ar", "dataset:e-hossam96/conllpp-ner-ar", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-06-20T14:20:13Z
--- language: - ar widget: - text: >- اجتياح رفح الفلسطينية أكبر جريمة إبادة فى التاريخ المعاصر - text: >- ولد محمد علي في القاهرة وعمل في شركة مايكروسوفت - text: >- أحمد مازن أحمد أسعد الشقيري (ولد في 6 يونيو 1973) إعلامي وكاتب سعودي ومقدم برامج تلفزيونية tags: - AraBERT - ner - nlp license: mit datasets: - e-hossam96/conllpp-ner-ar metrics: - f1 - precision - accuracy - recall --- --- # Model Card for Arabic Named Entity Recognition with AraBERT ## Model Details **Model Name:** AraBERT-NER **Model Type:** AraBERT (Pre-trained on Arabic text and fine-tuned on Arabic Named Entity Recognition task) **Language:** Arabic **License:** MIT **Model Creator:** Mostafa Ahmed **Contact Information:** mostafa.ahmed00976@gmail.com **Model Version:** 1.0 ## Overview AraBERT-NER is a fine-tuned version of the AraBERT model specifically designed for Named Entity Recognition (NER) tasks in Arabic. The model has been trained to identify and classify named entities such as persons, organizations, locations and MISC and more within Arabic text. This makes it suitable for various applications such as information extraction, document categorization, and data annotation in Arabic. ## Intended Use The model is intended for use in: - Named Entity Recognition systems for Arabic - Information extraction from Arabic text - Document categorization and annotation - Arabic language processing research ## Training Data The model was fine-tuned on the CoNLL-NER-AR dataset. **Data Sources:** - [CoNLL-NER-AR](https://huggingface.co/datasets/e-hossam96/conllpp-ner-ar): A dataset for named entity recognition tasks in Arabic. ## Training Procedure The model was trained using the Hugging Face `transformers` library. The training process involved: - Preprocessing the CoNLL-NER-AR to format the text and entity annotations for NER. - Fine-tuning the pre-trained AraBERT model on the Arabic NER dataset. - Evaluating the model's performance using standard NER metrics (e.g., Precision, Recall, F1 Score). ## Evaluation Results The model was evaluated on a held-out test set from the CoNLL-NER-AR dataset. Here are the key performance metrics: - **Precision:** 0.8547 - **Recall:** 0.8633 - **F1 Score:** 0.8590 - **Accuracy:** 0.9542 These metrics indicate the model's effectiveness in accurately identifying and classifying named entities in Arabic text. ## How to Use You can load and use the model with the Hugging Face `transformers` library as follows: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("MostafaAhmed98/AraBert-Arabic-NER-CoNLLpp") model = AutoModelForTokenClassification.from_pretrained("MostafaAhmed98/AraBert-Arabic-NER-CoNLLpp") # Create a NER pipeline ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer) # Example usage text = "ولد محمد علي في القاهرة وعمل في شركة مايكروسوفت." ner_results = ner_pipeline(text) for entity in ner_results: print(f"Entity: {entity['word']}, Label: {entity['entity']}, Confidence: {entity['score']:.2f}") ```
damgomz/ft_2_12e6_base_x8
damgomz
2024-06-20T23:48:02Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:26:39Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 117421.34464907646 | | Emissions (Co2eq in kg) | 0.0710534173152463 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3862206444137626 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1223124507486819 | | Consumed energy (kWh) | 1.5085330951624445 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.22603608844947215 | | Emissions (Co2eq in kg) | 0.04599002665422161 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_12e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.2e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.732536 | 0.238186 | | 1 | 0.319712 | 0.268162 | 0.919082 | | 2 | 0.229054 | 0.245151 | 0.907051 | | 3 | 0.184107 | 0.248547 | 0.894106 | | 4 | 0.140018 | 0.265130 | 0.919547 | | 5 | 0.101448 | 0.285507 | 0.916227 | | 6 | 0.072593 | 0.334950 | 0.905484 |
oldflag/symptom_dx_finetue_Llama-3_8b_Unsloth
oldflag
2024-06-20T23:47:10Z
21
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "medical", "unsloth", "question-answering", "en", "dataset:oldflag/symptom_dx_test", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
question-answering
2024-06-20T02:16:49Z
--- language: - en license: apache-2.0 tags: - medical - unsloth datasets: - oldflag/symptom_dx_test pipeline_tag: question-answering --- # Fine-Tuning Llama3-8b-bnb-4bit Model for Medical Symptom Diagnosis This project demonstrates how to fine-tune the Llama3-8b-bnb-4bit model using a Question and Answer dataset focused on medical symptoms and their diagnoses. The project is implemented using Google Colab and utilizes the `unsloth` library for efficient model handling. ## Overview The goal of this project is to fine-tune the Llama3-8b-bnb-4bit model to generate accurate medical diagnoses based on input symptoms. This is achieved by using a dataset of medical Q&A pairs and adapting the model to understand and respond to medical queries effectively. ## Setup and Installation 1. **Clone the repository and navigate to the project directory:** ```bash git clone https://github.com/oldfalg/FineTuning_Llama_3_8b_Symptom_Dx.git cd FineTuning_Llama_3_8b_Symptom_Dx ``` ## Key Components - Model Loading: Utilizes the FastLanguageModel from the unsloth library to load the pre-trained Llama3-8b-bnb-4bit model with 4-bit quantization for efficient memory usage. - Dataset Preparation: Uses the datasets library to load and process a Q&A dataset for fine-tuning. - Fine-Tuning: Fine-tunes the model in Colab to generate accurate diagnoses based on input symptoms. - Model Uploading: Supports saving the fine-tuned model in different formats (float16, int4, and LoRA adapters) and uploading it to Hugging Face. ## Inference After fine-tuning, the model can be used to generate diagnoses based on new symptom inputs. The project supports enabling native faster inference and using the fine-tuned model for generation tasks.
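The card mentions inference but shows no code; below is a minimal sketch using Unsloth's FastLanguageModel, assuming the uploaded weights load directly from this repo and using a purely illustrative prompt (match whatever prompt template was used during fine-tuning).

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="oldflag/symptom_dx_finetue_Llama-3_8b_Unsloth",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's native faster inference

# Illustrative prompt; adapt to the prompt format used during fine-tuning
prompt = "Symptoms: fever, dry cough, and loss of smell for three days. What is a likely diagnosis?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```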
kevin009/llama35
kevin009
2024-06-20T23:26:00Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T23:20:43Z
--- base_model: unsloth/llama-3-8b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** kevin009 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Areeb-02/bge-large-en-v1.5-CosentLoss
Areeb-02
2024-06-20T23:16:02Z
11
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:132", "loss:CoSENTLoss", "arxiv:1908.10084", "base_model:BAAI/bge-large-en-v1.5", "base_model:finetune:BAAI/bge-large-en-v1.5", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-20T23:15:00Z
--- base_model: BAAI/bge-large-en-v1.5 datasets: [] language: [] library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:132 - loss:CoSENTLoss widget: - source_sentence: A person shall have 3045 days after commencing business within the City to apply for a registration certificate. sentences: - The new transportation plan replaces the previous one approved by San Francisco voters in 2003. | - The Department of Elections is revising sections of its definitions and deleting a section to operate definitions for Article 12. | - A newly-established business shall have 3045 days after commencing business within the City to apply for a registration certificate, and the registration fee for such businesses shall be prorated based on the estimated gross receipts for the tax year in which the business commences. - source_sentence: The homelessness gross receipts tax is a privilege tax imposed upon persons engaging in business within the City for the privilege of engaging in a business or occupation in the City. | sentences: - The City imposes an annual Homelessness Gross Receipts Tax on businesses with more than $50,000,000 in total taxable gross receipts. | - The tax on Administrative Office Business Activities imposed by Section 2804.9 is intended as a complementary tax to the homelessness gross receipts tax, and shall be considered a homelessness gross receipts tax for purposes of this Article 28. | - '"The 5YPPs shall at a minimum address the following factors: compatibility with existing and planned land uses, and with adopted standards for urban design and for the provision of pedestrian amenities; and supportiveness of planned growth in transit-friendly housing, employment, and services." |' - source_sentence: '"The total worldwide compensation paid by the person and all related entities to the person is referred to as combined payroll." |' sentences: - '"A taxpayer is eligible to claim a credit against their immediately succeeding payments due for tax years or periods ending on or before December 31, 2024, of the respective tax type by applying all or part of an overpayment of the Homelessness Gross Receipts Tax in Article 28 (including the homelessness administrative office tax under Section 2804(d) of Article 28)." |' - '"Receipts from the sale of real property are exempt from the gross receipts tax if the Real Property Transfer Tax imposed by Article 12-C has been paid to the City."' - '"The total amount paid for compensation in the City by the person and by all related entities to the person is referred to as payroll in the City." |' - source_sentence: '"The gross receipts tax rates applicable to Category 6 Business Activities are determined based on the amount of taxable gross receipts from these activities." |' sentences: - '"The project meets the criteria outlined in Section 131051(d) of the Public Utilities Code."' - For the business activity of clean technology, a tax rate of 0.175% (e.g. $1.75 per $1,000) applies to taxable gross receipts between $0 and $1,000,000 for tax years beginning on or after January 1, 2021 through and including 2024. | - '"The tax rates for Category 7 Business Activities are also determined based on the amount of taxable gross receipts." 
|' - source_sentence: '"Compensation" refers to wages, salaries, commissions, bonuses, and property issued or transferred in exchange for services, as well as compensation for services to owners of pass-through entities, and any other form of remuneration paid to employees for services.' sentences: - '"Every person engaging in business within the City as an administrative office, as defined below, shall pay an annual administrative office tax measured by its total payroll expense that is attributable to the City:" |' - '"Remuneration" refers to any payment or reward, including but not limited to wages, salaries, commissions, bonuses, and property issued or transferred in exchange for services, as well as compensation for services to owners of pass-through entities, and any other form of compensation paid to employees for services.' - '"Construction of new Americans with Disabilities Act (ADA)-compliant curb ramps and related roadway work to permit ease of movement." |' model-index: - name: SentenceTransformer based on BAAI/bge-large-en-v1.5 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: Unknown type: unknown metrics: - type: pearson_cosine value: 0.3338547038124495 name: Pearson Cosine - type: spearman_cosine value: 0.41279297374061835 name: Spearman Cosine - type: pearson_manhattan value: 0.3102979152053135 name: Pearson Manhattan - type: spearman_manhattan value: 0.41673878893078603 name: Spearman Manhattan - type: pearson_euclidean value: 0.30953937257079917 name: Pearson Euclidean - type: spearman_euclidean value: 0.41279297374061835 name: Spearman Euclidean - type: pearson_dot value: 0.3338548430968143 name: Pearson Dot - type: spearman_dot value: 0.41279297374061835 name: Spearman Dot - type: pearson_max value: 0.3338548430968143 name: Pearson Max - type: spearman_max value: 0.41673878893078603 name: Spearman Max --- # SentenceTransformer based on BAAI/bge-large-en-v1.5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Areeb-02/bge-large-en-v1.5-CosentLoss") # Run inference sentences = [ '"Compensation" refers to wages, salaries, commissions, bonuses, and property issued or transferred in exchange for services, as well as compensation for services to owners of pass-through entities, and any other form of remuneration paid to employees for services.', '"Remuneration" refers to any payment or reward, including but not limited to wages, salaries, commissions, bonuses, and property issued or transferred in exchange for services, as well as compensation for services to owners of pass-through entities, and any other form of compensation paid to employees for services.', '"Every person engaging in business within the City as an administrative office, as defined below, shall pay an annual administrative office tax measured by its total payroll expense that is attributable to the City:" |', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.3339 | | **spearman_cosine** | **0.4128** | | pearson_manhattan | 0.3103 | | spearman_manhattan | 0.4167 | | pearson_euclidean | 0.3095 | | spearman_euclidean | 0.4128 | | pearson_dot | 0.3339 | | spearman_dot | 0.4128 | | pearson_max | 0.3339 | | spearman_max | 0.4167 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 132 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 10 tokens</li><li>mean: 41.99 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 42.72 tokens</li><li>max: 162 tokens</li></ul> | <ul><li>min: 0.25</li><li>mean: 0.93</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------| | <code>"Gross receipts as defined in Section 952.3 shall not include receipts from any sales of real property with respect to which the Real Property Transfer Tax imposed by Article 12-C has been paid to the City."</code> | <code>"Receipts from the sale of real property are exempt from the gross receipts tax if the Real Property Transfer Tax imposed by Article 12-C has been paid to the City."</code> | <code>1.0</code> | | <code>For tax years beginning on or after January 1, 2025, any person or combined group, except for a lessor of residential real estate, whose gross receipts within the City did not exceed $5,000,000, adjusted annually in accordance with the increase in the Consumer Price Index: All Urban Consumers for 
the San Francisco/Oakland/Hayward Area for All Items as reported by the United States Bureau of Labor Statistics, or any successor to that index, as of December 31 of the calendar year two years prior to the tax year, beginning with tax year 2026, and rounded to the nearest $10,000.</code> | <code>For taxable years ending on or before December 31, 2024, using the rules set forth in Sections 956.1 and 956.2, in the manner directed in Sections 953.1 through 953.7, inclusive, and in Section 953.9 of this Article 12-A-1; and</code> | <code>0.95</code> | | <code>"San Francisco Gross Receipts" refers to the revenue generated from sales and services within the city limits of San Francisco.</code> | <code>"Revenue generated from sales and services within the city limits of San Francisco"</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - 
`skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | spearman_cosine | |:-----:|:----:|:---------------:| | 3.0 | 51 | 0.4078 | | 5.0 | 45 | 0.4128 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.0.dev0 - PyTorch: 2.3.0+cu121 - Accelerate: 0.31.0 - Datasets: 2.19.2 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CoSENTLoss ```bibtex @online{kexuefm-8847, title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT}, author={Su Jianlin}, year={2022}, month={Jan}, url={https://kexue.fm/archives/8847}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
damgomz/ft_2_14e6_base_x8
damgomz
2024-06-20T23:10:08Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:21:07Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 115146.8324520588 | | Emissions (Co2eq in kg) | 0.0696770896868114 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.359368985365326 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1199433368106685 | | Consumed energy (kWh) | 1.479312322175992 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2216576524702132 | | Emissions (Co2eq in kg) | 0.04509917604372304 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_14e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.4e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.726376 | 0.405329 | | 1 | 0.319630 | 0.291530 | 0.878302 | | 2 | 0.230526 | 0.247908 | 0.919159 | | 3 | 0.183887 | 0.253975 | 0.919240 | | 4 | 0.149365 | 0.258969 | 0.906705 | | 5 | 0.103834 | 0.283343 | 0.919386 | | 6 | 0.076119 | 0.328027 | 0.917583 |
OpenPipe/Hermes-2-Theta-Llama-3-70B-32k
OpenPipe
2024-06-20T23:04:49Z
13
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "distillation", "synthetic data", "function calling", "structured outputs", "json mode", "conversational", "en", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T22:59:56Z
--- license: llama3 language: - en pipeline_tag: text-generation tags: - distillation - synthetic data - function calling - structured outputs - json mode --- # Hermes 2 Theta Llama-3 70B Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/P4NxBFwfBbboNZVytpn45.png) ## Model Description Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard and [Arcee AI](https://www.arcee.ai/), the team behind [MergeKit](https://github.com/arcee-ai/mergekit). Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both worlds. # Comparison to Llama-3 70B Instruct Model: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/sFhJmgamOCDJJdfjWbAQU.png) # Example Outputs ## System Prompt: Roleplay as an Anime Catgirl who's good at programming and a hacker. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/sr1_JNTi8QRkVVF607Eeg.png) ## Providing Structured Outputs for Annotating LLM Training Data ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/MqwomdZ6cwN0lGqRUZFqh.png) ## System Prompt: Roleplay as a bombastic alchemist from the 17th century in France. You are on a journey with the user to find the philosopher's stone. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/e59zj7WCvR8jvcOKwqDEc.png) # Prompt Format & Capabilities Hermes 2 Θ uses ChatML as the prompt format, opening up a very structured and steerable, multi-turn system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where unique tokens are added to denote the beginning and end of any turn, along with roles for the turns. The model is also specifically trained for Function Calling, Structured Outputs with JSON, and Feature Extraction from RAG Documents (see below). ## Conversational Chats Here is an example prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence.
I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` ## Function Calling Format Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function. 
See example inference code here: https://github.com/NousResearch/Hermes-Function-Calling ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the API and get the returned values for the call, and pass it back in as a new role, `tool`, like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to: <schema> {schema} </schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
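As a concrete illustration of the JSON-mode setup above, the sketch below builds the `{schema}` placeholder from a pydantic model and inserts it into the system prompt; the `StockSummary` model and its fields are purely illustrative, and the official helpers live in the Hermes-Function-Calling repository linked above.

```python
import json
from pydantic import BaseModel

class StockSummary(BaseModel):
    # Illustrative fields; define whatever structure your application needs
    symbol: str
    company_name: str
    pe_ratio: float

schema = json.dumps(StockSummary.model_json_schema(), indent=2)

system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    "Here's the json schema you must adhere to:\n<schema>\n" + schema + "\n</schema>"
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Summarize the key fundamentals for Tesla (TSLA)."},
]
# Format with tokenizer.apply_chat_template(messages, return_tensors="pt") as shown earlier
```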
# Benchmark Details

## GPT4All:

```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6638|_ |0.0138|
| | |acc_norm|0.6903|_ |0.0135|
|arc_easy | 0|acc |0.8851|_ |0.0065|
| | |acc_norm|0.8712|_ |0.0069|
|boolq | 1|acc |0.8820|_ |0.0056|
|hellaswag | 0|acc |0.6579|_ |0.0047|
| | |acc_norm|0.8432|_ |0.0036|
|openbookqa | 0|acc |0.3920|_ |0.0219|
| | |acc_norm|0.4740|_ |0.0224|
|piqa | 0|acc |0.8286|_ |0.0088|
| | |acc_norm|0.8351|_ |0.0087|
|winogrande | 0|acc |0.7893|_ |0.0115|
```

Average: 76.93

## AGIEval:

```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.4055|_ |0.0309|
| | |acc_norm|0.4094|_ |0.0309|
|agieval_logiqa_en | 0|acc |0.5100|_ |0.0196|
| | |acc_norm|0.5023|_ |0.0196|
|agieval_lsat_ar | 0|acc |0.2783|_ |0.0296|
| | |acc_norm|0.2957|_ |0.0302|
|agieval_lsat_lr | 0|acc |0.7451|_ |0.0193|
| | |acc_norm|0.7333|_ |0.0196|
|agieval_lsat_rc | 0|acc |0.8290|_ |0.0230|
| | |acc_norm|0.8104|_ |0.0239|
|agieval_sat_en | 0|acc |0.9029|_ |0.0207|
| | |acc_norm|0.9029|_ |0.0207|
|agieval_sat_en_without_passage| 0|acc |0.5825|_ |0.0344|
| | |acc_norm|0.5631|_ |0.0346|
|agieval_sat_math | 0|acc |0.6318|_ |0.0326|
| | |acc_norm|0.6227|_ |0.0328|
```

Average: 60.50

## BigBench:

```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6737|_ |0.0341|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7724|_ |0.0219|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3256|_ |0.0292|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4763|_ |0.0264|
| | |exact_str_match |0.0000|_ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.4720|_ |0.0223|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.3486|_ |0.0180|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.6367|_ |0.0278|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.5220|_ |0.0224|
|bigbench_navigate | 0|multiple_choice_grade|0.5930|_ |0.0155|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.8600|_ |0.0078|
|bigbench_ruin_names | 0|multiple_choice_grade|0.7411|_ |0.0207|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.5281|_ |0.0158|
|bigbench_snarks | 0|multiple_choice_grade|0.6961|_ |0.0343|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5751|_ |0.0158|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.9880|_ |0.0034|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2296|_ |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1691|_ |0.0090|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.6367|_ |0.0278|
```

Average: 56.91

## TruthfulQA:

```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4565|_ |0.0174|
| | |mc2 |0.6288|_ |0.0151|
```

62.88

## IFEval:

**87.99**

## MTBench:

First Turn - **9.1625**
Second Turn - **8.925**
Average - **9.04375**

# Inference Code

Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)

```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn  # imported to make sure the optional dependencies are installed

tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-70B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Theta-Llama-3-70B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering Kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
    ]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```

## Inference Code for Function Calling:

All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png)

# Chat Interfaces

When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Theta. LM Studio does not support function calling - for that, use our GitHub repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)

## Quantized Versions:

GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF

# How to cite:

```bibtex
@misc{Hermes-2-Theta-Llama-3-70B,
      url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B},
      title={Hermes-2-Theta-Llama-3-70B},
      author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
comet24082002/finetuned_bge_ver35_Round2
comet24082002
2024-06-20T23:01:17Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "feature-extraction", "generated_from_trainer", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-06-20T13:20:31Z
--- license: mit base_model: BAAI/bge-m3 tags: - generated_from_trainer model-index: - name: finetuned_bge_ver35_Round2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_bge_ver35_Round2 This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.3 - num_epochs: 7.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
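For readers who want to set up a similar run, the hyperparameters listed above correspond roughly to the following `TrainingArguments`. This is only a sketch: the output directory and any options not listed are assumptions, and the 2-GPU distributed setup is handled by the launcher rather than by this object.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned_bge_ver35_Round2",  # assumed name, chosen to match the model id
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.3,
    num_train_epochs=7.0,
    fp16=True,  # "Native AMP" mixed precision
)
```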
John6666/jac-nsfw-v2f-sdxl
John6666
2024-06-20T22:58:04Z
2,491
3
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-20T22:53:22Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - pony --- Original model is [here](https://civitai.com/models/518937/jac-nsfw?modelVersionId=586617).
ambrosfitz/phi3_history_v2
ambrosfitz
2024-06-20T22:56:33Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "unsloth", "trl", "sft", "phi3", "conversational", "dataset:ambrosfitz/10k_history_data_v4", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T17:43:53Z
--- tags: - unsloth - trl - sft - phi3 license: apache-2.0 datasets: - ambrosfitz/10k_history_data_v4 ---
ShirinYamani/huggyllama-llama-7b-finetuned
ShirinYamani
2024-06-20T22:53:58Z
0
0
transformers
[ "transformers", "safetensors", "peft", "lora", "dora", "text-generation", "dataset:timdettmers/openassistant-guanaco", "base_model:huggyllama/llama-7b", "base_model:adapter:huggyllama/llama-7b", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2024-06-18T19:04:06Z
---
base_model: huggyllama/llama-7b
library_name: transformers
license: mit
tags:
- peft
- lora
- dora
model-index:
- name: llama-3-8-fine-tuned-dora
  results: []
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama-3-8-fine-tuned-dora

![huggyllama](https://miro.medium.com/v2/resize:fit:1358/0*UBaord-00Sm4asfW.png)

This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset. In the `LoraConfig` we set `use_dora=True` to enable the DoRA decomposition and compare it with plain LoRA (see the configuration sketch at the end of this card).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
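Since the card only states `use_dora=True`, the snippet below is an illustrative reconstruction of the adapter setup; the rank, alpha, dropout, and target modules are assumptions and may differ from the values actually used for this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

lora_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
    use_dora=True,  # DoRA weight decomposition (requires PEFT >= 0.9)
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```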
aimlabs/0621-llama2-e
aimlabs
2024-06-20T22:52:42Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-20T22:39:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
damgomz/ft_1_13e6_base_x8
damgomz
2024-06-20T22:46:25Z
7
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:27:54Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 113719.51773810388 | | Emissions (Co2eq in kg) | 0.0688134351287268 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3425193625297809 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1184567341290415 | | Consumed energy (kWh) | 1.460976096658815 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.21891007164584997 | | Emissions (Co2eq in kg) | 0.04454014444742402 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_13e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.3e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.714698 | 0.180125 | | 1 | 0.343492 | 0.281244 | 0.927438 | | 2 | 0.239391 | 0.250671 | 0.910096 | | 3 | 0.192733 | 0.245173 | 0.917917 | | 4 | 0.154650 | 0.254767 | 0.921362 | | 5 | 0.120109 | 0.266124 | 0.915277 | | 6 | 0.094371 | 0.289996 | 0.920471 |
damgomz/ft_2_18e6_base_x4
damgomz
2024-06-20T22:45:46Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:19:55Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 113670.37732672691 | | Emissions (Co2eq in kg) | 0.0687836767063817 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3419389314817056 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1184053649532293 | | Consumed energy (kWh) | 1.4603442964349318 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2188154763539493 | | Emissions (Co2eq in kg) | 0.044520897786301375 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_18e6_base_x4 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.8e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.718642 | 0.303374 | | 1 | 0.403867 | 0.359116 | 0.734316 | | 2 | 0.342729 | 0.356861 | 0.724848 | | 3 | 0.321212 | 0.361382 | 0.904577 | | 4 | 0.308924 | 0.380484 | 0.871151 | | 5 | 0.283730 | 0.370478 | 0.886065 | | 6 | 0.260667 | 0.368334 | 0.906538 |
damgomz/ft_1_15e6_base_x8
damgomz
2024-06-20T22:40:58Z
7
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:22:25Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 113395.79560804369 | | Emissions (Co2eq in kg) | 0.068617558876779 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.338697821551724 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1181196319371464 | | Consumed energy (kWh) | 1.4568174534888658 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.21828690654548408 | | Emissions (Co2eq in kg) | 0.04441335327981711 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_15e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.5e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.716586 | 0.510773 | | 1 | 0.402656 | 0.370399 | 0.838166 | | 2 | 0.373072 | 0.359665 | 0.729723 | | 3 | 0.296739 | 0.281610 | 0.908057 | | 4 | 0.232505 | 0.277317 | 0.913647 | | 5 | 0.225349 | 0.338010 | 0.841930 | | 6 | 0.212031 | 0.290730 | 0.901609 |
whizzzzkid/test_sn9_1
whizzzzkid
2024-06-20T22:31:11Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T21:56:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/maldv_-_eleusis-7b-alpha-gguf
RichardErkhov
2024-06-20T22:21:45Z
9
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-20T19:53:54Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) eleusis-7b-alpha - GGUF - Model creator: https://huggingface.co/maldv/ - Original model: https://huggingface.co/maldv/eleusis-7b-alpha/ | Name | Quant method | Size | | ---- | ---- | ---- | | [eleusis-7b-alpha.Q2_K.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q2_K.gguf) | Q2_K | 2.53GB | | [eleusis-7b-alpha.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [eleusis-7b-alpha.IQ3_S.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.IQ3_S.gguf) | IQ3_S | 2.96GB | | [eleusis-7b-alpha.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [eleusis-7b-alpha.IQ3_M.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.IQ3_M.gguf) | IQ3_M | 3.06GB | | [eleusis-7b-alpha.Q3_K.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q3_K.gguf) | Q3_K | 3.28GB | | [eleusis-7b-alpha.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [eleusis-7b-alpha.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [eleusis-7b-alpha.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [eleusis-7b-alpha.Q4_0.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q4_0.gguf) | Q4_0 | 3.83GB | | [eleusis-7b-alpha.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [eleusis-7b-alpha.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [eleusis-7b-alpha.Q4_K.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q4_K.gguf) | Q4_K | 4.07GB | | [eleusis-7b-alpha.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [eleusis-7b-alpha.Q4_1.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q4_1.gguf) | Q4_1 | 4.24GB | | [eleusis-7b-alpha.Q5_0.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q5_0.gguf) | Q5_0 | 4.65GB | | [eleusis-7b-alpha.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [eleusis-7b-alpha.Q5_K.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q5_K.gguf) | Q5_K | 4.78GB | | [eleusis-7b-alpha.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [eleusis-7b-alpha.Q5_1.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q5_1.gguf) | Q5_1 | 5.07GB | | 
[eleusis-7b-alpha.Q6_K.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q6_K.gguf) | Q6_K | 5.53GB |
| [eleusis-7b-alpha.Q8_0.gguf](https://huggingface.co/RichardErkhov/maldv_-_eleusis-7b-alpha-gguf/blob/main/eleusis-7b-alpha.Q8_0.gguf) | Q8_0 | 7.17GB |

Original model description:
---
license: cc-by-nc-4.0
tags:
- merge
- conversational
- multi-task
pipeline_tag: text-generation
---

# Eleusis 7B - α - "Red Team Assistant"

Eleusis is Hermes' son... Get it?

## Groundwork

A merge of a cluster of Hermes-related models, to see if we could get more informative and engaging responses.

* OpenHermes-2.5-Mistral-7B and merged in

# Inputs
* West-Hermes-7B
* Einstein-v4-7B
* Prox-MistralHermes-7B
* dolphin-2.8-experiment26-7b

# Outputs
* LaseredHermes-7B
* Prox-MistralHermes-7B
* Einstein-v4-7B
* Noromaid-7B-0.4-DPO
* West-Hermes-7B

### 9-partition merge

All of the layers were partitioned into 9 random bins. Alternating models were slerped at [1...0.5] (inputs), and [0.5...1] (outputs) gradients; except attention, which was slerped at 0.97 (with a drop rate of .28). I originally had it at a less extreme gradation, but it wasn't enough to lock in the special tokens.

### Other

Includes fast tokenizer.

## Chat Template

*from OpenHermes 2.5*

OpenHermes 2.5 now uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts are now a thing that matters! Hermes 2.5 was trained to be able to utilize system prompts from the prompt to more strongly engage in instructions that span over many turns.

This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.

This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.

Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by a man named Teknium, who designed me to assist and support users with their needs and requests.<|im_end|>
```

### Fun Prompts

Try
```
<|im_start|>system
You are a red team hacking assistant AI. Please use visual descriptions when interacting with the user.<|im_end|>
<|im_start|>user
{% Your Request %}<|im_end|>
<|im_start|>assistant
```
juliamendelsohn/social-movement-framing-elements
juliamendelsohn
2024-06-20T22:15:53Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "endpoints_compatible", "region:us" ]
null
2024-06-19T21:39:58Z
RoBERTa model for detecting framing elements in tweets related to social movements. Multilabel classification of 7 categories: - problem identification (diagnostic core framing task) - blame attribution (diagnostic core framing task) - proposing solutions (prognostic core framing task) - discussing tactics (prognostic core framing task) - expressing solidarity / celebrating a movement (prognostic core framing task) - counterframing / challenging the opposite sides' arguments (prognostic core framing task) - motivational / calls to action (motivational core framing task) The model is trained on Twitter data from three issue areas: guns, LGBTQ rights, and immigration. All data is in English and written in 2018-2019. Paper: https://journalqd.org/article/view/5896
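A minimal inference sketch (not part of the original card): it assumes the checkpoint includes a sequence-classification head and a bundled tokenizer (otherwise load the `roberta-base` tokenizer), and that the label order matches the list above, so verify against the model's `config.id2label` before relying on it.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed label order, taken from the category list above; check config.id2label.
labels = [
    "problem identification", "blame attribution", "proposing solutions",
    "discussing tactics", "solidarity", "counterframing", "motivational",
]

model_id = "juliamendelsohn/social-movement-framing-elements"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("We march tomorrow at noon. Join us!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel classification: apply a sigmoid per class instead of a softmax over classes.
probs = torch.sigmoid(logits)[0]
predicted = [label for label, p in zip(labels, probs) if p > 0.5]
print(predicted)
```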
aimlabs/0621-solar-k
aimlabs
2024-06-20T22:14:01Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-20T22:02:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/getsujitsu-v1-sdxl
John6666
2024-06-20T22:03:49Z
1,465
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-20T21:59:02Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime --- Original model is [here](https://civitai.com/models/527347/getsujitsuv10?modelVersionId=585924).
damgomz/ft_2_17e6_base_x2
damgomz
2024-06-20T21:57:29Z
7
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:17:23Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 110773.13899111748 | | Emissions (Co2eq in kg) | 0.067030494150441 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3077351685499117 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1153873585820193 | | Consumed energy (kWh) | 1.4231225271319294 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.21323829255790114 | | Emissions (Co2eq in kg) | 0.04338614610485434 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_17e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.7e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.707929 | 0.373699 | | 1 | 0.319338 | 0.294026 | 0.873274 | | 2 | 0.227911 | 0.232662 | 0.926162 | | 3 | 0.183436 | 0.233870 | 0.922216 | | 4 | 0.144061 | 0.244259 | 0.926748 | | 5 | 0.102836 | 0.271336 | 0.916719 | | 6 | 0.080464 | 0.315296 | 0.916698 |
Dimi-G/roberta-base-emotion
Dimi-G
2024-06-20T21:56:47Z
19
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "emotion", "pytorch", "en", "dataset:dair-ai/emotion", "arxiv:1910.09700", "arxiv:1907.11692", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-17T15:19:24Z
--- library_name: transformers tags: - emotion - text-classification - pytorch license: apache-2.0 datasets: - dair-ai/emotion language: - en metrics: - accuracy - f1 pipeline_tag: text-classification --- # RoBERTa for Emotion Classification ## Model Description This model is a fine-tuned version of `RoBERTaForSequenceClassification` trained to classify text into six emotion categories: Sadness, Joy, Love, Anger, Fear, and Surprise. - [RoBERTa](https://huggingface.co/docs/transformers/v4.41.3/en/model_doc/roberta#transformers.RobertaForSequenceClassification) - Special thanks to [bhadresh-savani](https://huggingface.co/bhadresh-savani/roberta-base-emotion), whose notebook was the main guide for this work. ## Intended Use The model is intended for classifying emotions in text data. It can be used in applications involving sentiment analysis, chatbots, social media monitoring, diary entries. ### Limitations - The model is trained on a specific emotion dataset and may not generalize well to other datasets or domains. - It might not perform well on text with mixed or ambiguous emotions. ## How to use the model ```python from transformers import pipeline classifier = pipeline(model="Dimi-G/roberta-base-emotion") emotions=classifier("i feel very happy and excited since i learned so many things", top_k=None) print(emotions) """ Output: [{'label': 'Joy', 'score': 0.9991986155509949}, {'label': 'Love', 'score': 0.0003064649645239115}, {'label': 'Sadness', 'score': 0.0001680034474702552}, {'label': 'Anger', 'score': 0.00012623333896044642}, {'label': 'Surprise', 'score': 0.00011396403715480119}, {'label': 'Fear', 'score': 8.671794785186648e-05}] """ ``` ## Training Details The model was trained on a randomized subset of the [dar-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset from the Hugging Face datasets library. Here are the training parameters: - **Batch size**: 64 - **Number of epochs**: 10 - **Learning rate**: 5e-5 - **Warmup steps**: 500 - **Weight decay**: 0.03 - **Evaluation strategy**: epoch - **Save strategy**: epoch - **Metric for best model**: F1 score ## Evaluation ```python {'eval_loss': 0.18195335566997528, 'eval_accuracy': 0.94, 'eval_f1': 0.9396676959491667, 'eval_runtime': 1.1646, 'eval_samples_per_second': 858.685, 'eval_steps_per_second': 13.739, 'epoch': 10.0} ``` ## Model Resources Link to the notebook with details on fine-tuning the model and our approach with other models for emotion classification: - **Repository:** [Beginners Guide to Emotion Classification](https://github.com/Dimi-G/Capstone_Project/blob/main/Beginners_guide_to_emotion_classification.ipynb) ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). ## Citation - [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://huggingface.co/papers/1907.11692)
MothersTongue/mother_tongue_model_v3
MothersTongue
2024-06-20T21:56:25Z
36
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "sn", "dataset:mother_tongue_complete_dataset", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-20T17:14:18Z
--- language: - sn license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mother_tongue_complete_dataset metrics: - wer model-index: - name: mother_tongue_model_v3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mother_tongue_dataset_complete type: mother_tongue_complete_dataset args: 'config: sn' metrics: - name: Wer type: wer value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mother_tongue_model_v3 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mother_tongue_dataset_complete dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Wer: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:--------:|:----:|:---------------:|:---:| | 0.0018 | 16.3934 | 500 | 0.0009 | 0.0 | | 0.0001 | 32.7869 | 1000 | 0.0001 | 0.0 | | 0.0 | 49.1803 | 1500 | 0.0001 | 0.0 | | 0.0 | 65.5738 | 2000 | 0.0000 | 0.0 | | 0.0 | 81.9672 | 2500 | 0.0000 | 0.0 | | 0.0 | 98.3607 | 3000 | 0.0000 | 0.0 | | 0.0 | 114.7541 | 3500 | 0.0000 | 0.0 | | 0.0 | 131.1475 | 4000 | 0.0000 | 0.0 | | 0.0 | 147.5410 | 4500 | 0.0000 | 0.0 | | 0.0 | 163.9344 | 5000 | 0.0000 | 0.0 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
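A minimal usage sketch (not from the original card): transcription through the standard speech-recognition pipeline, with a placeholder audio file name.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MothersTongue/mother_tongue_model_v3",
)

# "sample_clip.wav" is a placeholder for your own Shona audio file.
result = asr("sample_clip.wav")
print(result["text"])
```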
damgomz/ft_1_18e6_base_x1
damgomz
2024-06-20T21:52:29Z
17
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:26:05Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 110473.56648373604 | | Emissions (Co2eq in kg) | 0.0668492233654004 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3041985967737115 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1150753754434486 | | Consumed energy (kWh) | 1.41927397221716 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.21266161548119186 | | Emissions (Co2eq in kg) | 0.04326881353946328 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_1_18e6_base_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.8e-05 | | batch_size | 1 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.728490 | 0.215942 | | 1 | 0.534088 | 0.492206 | 0.658617 | | 2 | 0.453986 | 0.485691 | 0.518904 | | 3 | 0.432910 | 0.419078 | 0.587941 | | 4 | 0.335373 | 0.340582 | 0.895713 | | 5 | 0.289835 | 0.338109 | 0.903333 | | 6 | 0.253915 | 0.345775 | 0.738484 |
damgomz/ft_2_13e6_base_x8
damgomz
2024-06-20T21:50:59Z
7
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-20T16:49:31Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 110389.22730994225 | | Emissions (Co2eq in kg) | 0.066798195786377 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.3032030856816257 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1149875214472412 | | Consumed energy (kWh) | 1.4181906071288737 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.21249926257163881 | | Emissions (Co2eq in kg) | 0.04323578069639405 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_13e6_base_x8 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.3e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.721676 | 0.475052 | | 1 | 0.318469 | 0.248439 | 0.912002 | | 2 | 0.228454 | 0.237786 | 0.906760 | | 3 | 0.180330 | 0.232310 | 0.920927 | | 4 | 0.132077 | 0.256897 | 0.913617 | | 5 | 0.100098 | 0.270191 | 0.928010 | | 6 | 0.068715 | 0.343794 | 0.907559 |
alibayram/Doktor-Llama-3-8b.gguf
alibayram
2024-06-20T21:50:16Z
10
3
null
[ "gguf", "tr", "dataset:alibayram/doktorsitesi", "license:cc-by-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-20T21:41:16Z
---
license: cc-by-4.0
datasets:
- alibayram/doktorsitesi
language:
- tr
---

# Doktor-Llama-3-8b.gguf

## Overview

**Doktor-Llama-3-8b.gguf** is the version of the **alibayram/Doktor-Llama-3-8b** model converted to the GGUF format. GGUF is a newer format that allows model files to be processed more efficiently and quickly. This conversion was made so that the model can be used more easily across a variety of platforms and applications.

## What is GGUF?

GGUF stands for "Graphical GPU Unified Format" and is a format developed specifically for storing and processing large language models more efficiently. It reduces the size of model files while improving performance, enabling faster and more efficient use.

## Citation

If you use this model in your research or application, please cite it as follows:

```
@model{alibayram2024doktor-llama-3-8b-gguf,
  author = {Ali Bayram},
  title = {Doktor-Llama-3-8b.gguf},
  year = 2024,
  url = {https://huggingface.co/alibayram/Doktor-Llama-3-8b.gguf},
}
```

## License

This model is licensed under the Creative Commons Attribution Non Commercial 4.0 license. You may use, share, and adapt the model for non-commercial purposes, provided that you give appropriate attribution.

## Contributions

Contributions to this model are welcome. If you have any suggestions, improvements, or additional data, please contact the model creator or submit a pull request in the model repository.

## Contact

For any questions or further information, please get in touch:

- **Name:** Ali Bayram
- **Email:** malibayram20@gmail.com

## Acknowledgements

We thank all the healthcare professionals and data providers who contributed to the development of this model.
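A minimal local-inference sketch (not part of the original card): loading a downloaded GGUF file with llama-cpp-python. The file name and quantization level below are placeholders for whichever file the repository actually provides.

```python
from llama_cpp import Llama

# Placeholder path: substitute the actual .gguf file downloaded from this repository.
llm = Llama(model_path="doktor-llama-3-8b.Q4_K_M.gguf", n_ctx=4096)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba, baş ağrısı için hangi bölüme gitmeliyim?"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```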
RichardErkhov/FelixChao_-_Faraday-7B-gguf
RichardErkhov
2024-06-20T21:47:33Z
12
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-06-20T13:34:10Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Faraday-7B - GGUF - Model creator: https://huggingface.co/FelixChao/ - Original model: https://huggingface.co/FelixChao/Faraday-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Faraday-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [Faraday-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Faraday-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Faraday-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Faraday-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Faraday-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [Faraday-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Faraday-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Faraday-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Faraday-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [Faraday-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Faraday-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Faraday-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [Faraday-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Faraday-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [Faraday-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [Faraday-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Faraday-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [Faraday-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Faraday-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [Faraday-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [Faraday-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FelixChao_-_Faraday-7B-gguf/blob/main/Faraday-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- 
license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
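As a rough usage sketch for the quantized files listed at the top of this card (the Q4_K_M choice is arbitrary and the settings are illustrative), one of them can be fetched with `huggingface_hub` and loaded with `llama-cpp-python`:

```python
# Sketch: download one of the quantized GGUF files listed above and run it locally.
# The file name matches the Q4_K_M row of the table; pick whichever quant fits your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/FelixChao_-_Faraday-7B-gguf",
    filename="Faraday-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("What is a large language model?", max_tokens=64)["choices"][0]["text"])
```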
francepfl/mantis-8b-idefics2_drivelm_exp10_english_8192
francepfl
2024-06-20T21:44:26Z
5
0
transformers
[ "transformers", "safetensors", "idefics2", "image-text-to-text", "generated_from_trainer", "base_model:francepfl/DriveLM-mantis-8b-idefics2_8192-cot", "base_model:finetune:francepfl/DriveLM-mantis-8b-idefics2_8192-cot", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-06-20T21:01:21Z
--- base_model: francepfl/DriveLM-mantis-8b-idefics2_8192-cot tags: - generated_from_trainer model-index: - name: mantis-8b-idefics2_drivelm_exp10_english_8192 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mantis-8b-idefics2_drivelm_exp10_english_8192 This model is a fine-tuned version of [francepfl/DriveLM-mantis-8b-idefics2_8192-cot](https://huggingface.co/francepfl/DriveLM-mantis-8b-idefics2_8192-cot) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
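Since the card does not yet include a usage snippet, here is a minimal loading sketch under the assumption that this fine-tune keeps the standard Idefics2 processor and chat template of its base model; the image path and question are placeholders.

```python
# Minimal inference sketch for an Idefics2-based checkpoint (assumptions noted above).
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "francepfl/mantis-8b-idefics2_drivelm_exp10_english_8192"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

image = Image.open("driving_scene.png")  # placeholder image
messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What should the ego vehicle do next?"}]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```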
google/xtr-base-en
google
2024-06-20T21:42:40Z
493
5
transformers
[ "transformers", "safetensors", "t5", "arxiv:2304.01982", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2024-04-29T15:35:49Z
---
license: apache-2.0
---

# XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval

This card shows how you can run [XTR](https://arxiv.org/abs/2304.01982) in PyTorch. We thank Mujeen Sung (https://github.com/mjeensung/xtr-pytorch) for providing this functionality.

## Installation

```
$ git clone git@github.com:mjeensung/xtr-pytorch.git
$ pip install -e .
```

## Usage

```python
from nltk.tokenize import sent_tokenize  # sentence splitter used below (pip install nltk)
# XtrRetriever is provided by the xtr-pytorch package installed above; see its README for the exact import path.

# Create the dataset
sample_doc = "Google LLC (/ˈɡuːɡəl/ (listen)) is an American multinational technology company focusing on online advertising, search engine technology, cloud computing, computer software, quantum computing, e-commerce, artificial intelligence..."
chunks = [chunk.lower() for chunk in sent_tokenize(sample_doc)]

# Load the XTR retriever
xtr = XtrRetriever(model_name_or_path="google/xtr-base-en", use_faiss=False, device="cuda")

# Build the index
xtr.build_index(chunks)

# Retrieve top-3 documents given the query
query = "Who founded google"
retrieved_docs, metadata = xtr.retrieve_docs([query], document_top_k=3)
for rank, (did, score, doc) in enumerate(retrieved_docs[0]):
    print(f"[{rank}] doc={did} ({score:.3f}): {doc}")

"""
>> [0] doc=0 (0.925): google llc (/ˈɡuːɡəl/ (listen)) is an american multinational technology company focusing on online advertising, search engine technology, cloud computing, computer software, quantum computing, e-commerce, artificial intelligence, and consumer electronics.
>> [1] doc=1 (0.903): it has been referred to as "the most powerful company in the world" and one of the world's most valuable brands due to its market dominance, data collection, and technological advantages in the area of artificial intelligence.
>> [2] doc=2 (0.900): its parent company alphabet is considered one of the big five american information technology companies, alongside amazon, apple, meta, and microsoft.
"""
```

## Citing this work

```bibtex
@article{lee2024rethinking,
  title={Rethinking the role of token retrieval in multi-vector retrieval},
  author={Lee, Jinhyuk and Dai, Zhuyun and Duddu, Sai Meher Karthik and Lei, Tao and Naim, Iftekhar and Chang, Ming-Wei and Zhao, Vincent},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}
```
damgomz/ft_2_18e6_base_x2
damgomz
2024-06-20T21:41:44Z
7
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-20T16:54:29Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 109827.93413639069 | | Emissions (Co2eq in kg) | 0.0664585372781499 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.2965765867951828 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1144027401437365 | | Consumed energy (kWh) | 1.4109793269389177 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2114187732125521 | | Emissions (Co2eq in kg) | 0.04301594087008635 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_18e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.8e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.717957 | 0.239483 | | 1 | 0.321873 | 0.276253 | 0.931059 | | 2 | 0.238292 | 0.271407 | 0.903014 | | 3 | 0.193588 | 0.251865 | 0.927781 | | 4 | 0.151241 | 0.283316 | 0.861165 | | 5 | 0.112451 | 0.321203 | 0.907193 | | 6 | 0.081003 | 0.341712 | 0.896467 |
RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf
RichardErkhov
2024-06-20T21:35:57Z
7
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-06-20T12:55:47Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) JSL-MedLlama-3-8B-v1.0 - GGUF - Model creator: https://huggingface.co/johnsnowlabs/ - Original model: https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v1.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [JSL-MedLlama-3-8B-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q2_K.gguf) | Q2_K | 2.96GB | | [JSL-MedLlama-3-8B-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [JSL-MedLlama-3-8B-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.IQ3_S.gguf) | IQ3_S | 3.43GB | | [JSL-MedLlama-3-8B-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [JSL-MedLlama-3-8B-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.IQ3_M.gguf) | IQ3_M | 3.52GB | | [JSL-MedLlama-3-8B-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q3_K.gguf) | Q3_K | 3.74GB | | [JSL-MedLlama-3-8B-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [JSL-MedLlama-3-8B-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [JSL-MedLlama-3-8B-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [JSL-MedLlama-3-8B-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q4_0.gguf) | Q4_0 | 4.34GB | | [JSL-MedLlama-3-8B-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [JSL-MedLlama-3-8B-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [JSL-MedLlama-3-8B-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q4_K.gguf) | Q4_K | 4.58GB | | [JSL-MedLlama-3-8B-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [JSL-MedLlama-3-8B-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q4_1.gguf) | Q4_1 | 4.78GB | | [JSL-MedLlama-3-8B-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q5_0.gguf) | Q5_0 | 5.21GB | | [JSL-MedLlama-3-8B-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | 
[JSL-MedLlama-3-8B-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q5_K.gguf) | Q5_K | 5.34GB | | [JSL-MedLlama-3-8B-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [JSL-MedLlama-3-8B-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q5_1.gguf) | Q5_1 | 5.65GB | | [JSL-MedLlama-3-8B-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q6_K.gguf) | Q6_K | 6.14GB | | [JSL-MedLlama-3-8B-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-MedLlama-3-8B-v1.0-gguf/blob/main/JSL-MedLlama-3-8B-v1.0.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- tags: - llama-3-8b - sft - medical base_model: - meta-llama/Meta-Llama-3-8B license: cc-by-nc-nd-4.0 --- # JSL-MedLlama-3-8B-v1.0 [<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com) This model is developed by [John Snow Labs](https://www.johnsnowlabs.com/). This model is available under a [CC-BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en) license and must also conform to this [Acceptable Use Policy](https://huggingface.co/johnsnowlabs). If you need to license this model for commercial use, please contact us at info@johnsnowlabs.com. ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "johnsnowlabs/JSL-MedLlama-3-8B-v1.0" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` ## 🏆 Evaluation | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |-------------------------------|-------|------|-----:|--------|-----:|---|-----:| |stem |N/A |none | 0|acc |0.6217|± |0.0057| | | |none | 0|acc_norm|0.5847|± |0.0066| | - medmcqa |Yaml |none | 0|acc |0.5563|± |0.0077| | | |none | 0|acc_norm|0.5563|± |0.0077| | - medqa_4options |Yaml |none | 0|acc |0.6779|± |0.0131| | | |none | 0|acc_norm|0.6779|± |0.0131| | - anatomy (mmlu) | 0|none | 0|acc |0.6963|± |0.0397| | - clinical_knowledge (mmlu) | 0|none | 0|acc |0.7509|± |0.0266| | - college_biology (mmlu) | 0|none | 0|acc |0.7986|± |0.0335| | - college_medicine (mmlu) | 0|none | 0|acc |0.6590|± |0.0361| | - medical_genetics (mmlu) | 0|none | 0|acc |0.8500|± |0.0359| | - professional_medicine (mmlu)| 0|none | 0|acc |0.7868|± |0.0249| | - pubmedqa | 1|none | 0|acc |0.7380|± |0.0197| |Groups|Version|Filter|n-shot| Metric |Value | |Stderr| |------|-------|------|-----:|--------|-----:|---|-----:| |stem |N/A |none | 0|acc |0.6217|± |0.0057| | | |none | 0|acc_norm|0.5847|± |0.0066|
RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf
RichardErkhov
2024-06-20T21:34:50Z
8
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-20T13:41:11Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) A-I-0xtom-7B-slerp - GGUF - Model creator: https://huggingface.co/InnerI/ - Original model: https://huggingface.co/InnerI/A-I-0xtom-7B-slerp/ | Name | Quant method | Size | | ---- | ---- | ---- | | [A-I-0xtom-7B-slerp.Q2_K.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q2_K.gguf) | Q2_K | 2.53GB | | [A-I-0xtom-7B-slerp.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [A-I-0xtom-7B-slerp.IQ3_S.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.IQ3_S.gguf) | IQ3_S | 2.96GB | | [A-I-0xtom-7B-slerp.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [A-I-0xtom-7B-slerp.IQ3_M.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.06GB | | [A-I-0xtom-7B-slerp.Q3_K.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q3_K.gguf) | Q3_K | 3.28GB | | [A-I-0xtom-7B-slerp.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [A-I-0xtom-7B-slerp.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [A-I-0xtom-7B-slerp.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [A-I-0xtom-7B-slerp.Q4_0.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q4_0.gguf) | Q4_0 | 3.83GB | | [A-I-0xtom-7B-slerp.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [A-I-0xtom-7B-slerp.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [A-I-0xtom-7B-slerp.Q4_K.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q4_K.gguf) | Q4_K | 4.07GB | | [A-I-0xtom-7B-slerp.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [A-I-0xtom-7B-slerp.Q4_1.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q4_1.gguf) | Q4_1 | 4.24GB | | [A-I-0xtom-7B-slerp.Q5_0.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q5_0.gguf) | Q5_0 | 4.65GB | | [A-I-0xtom-7B-slerp.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [A-I-0xtom-7B-slerp.Q5_K.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q5_K.gguf) | Q5_K | 4.78GB | | [A-I-0xtom-7B-slerp.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | 
[A-I-0xtom-7B-slerp.Q5_1.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q5_1.gguf) | Q5_1 | 5.07GB | | [A-I-0xtom-7B-slerp.Q6_K.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q6_K.gguf) | Q6_K | 5.53GB | | [A-I-0xtom-7B-slerp.Q8_0.gguf](https://huggingface.co/RichardErkhov/InnerI_-_A-I-0xtom-7B-slerp-gguf/blob/main/A-I-0xtom-7B-slerp.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 tags: - merge - mergekit - lazymergekit - 0x0dad0/nous_nous_v2_0 - tomaszki/nous-thirty base_model: - 0x0dad0/nous_nous_v2_0 - tomaszki/nous-thirty model-index: - name: A-I-0xtom-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 58.19 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 77.64 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 58.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.78 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 73.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 40.18 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InnerI/A-I-0xtom-7B-slerp name: Open LLM Leaderboard --- # A-I-0xtom-7B-slerp A-I-0xtom-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [0x0dad0/nous_nous_v2_0](https://huggingface.co/0x0dad0/nous_nous_v2_0) * [tomaszki/nous-thirty](https://huggingface.co/tomaszki/nous-thirty) # Avg model loss 0.3912096044793725 I used this testing script that loads your local model, pulls the latest data from cortex and calculates the loss: [avg loss script](https://gist.github.com/romanorac/59ccde7cbf07d8950ef9fb5b5db6a24e) ## 🧩 Configuration ```yaml slices: - sources: - model: 0x0dad0/nous_nous_v2_0 layer_range: [0, 32] - model: tomaszki/nous-thirty layer_range: [0, 32] merge_method: slerp base_model: 
0x0dad0/nous_nous_v2_0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "InnerI/A-I-0xtom-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_InnerI__A-I-0xtom-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |60.46| |AI2 Reasoning Challenge (25-Shot)|58.19| |HellaSwag (10-Shot) |77.64| |MMLU (5-Shot) |58.74| |TruthfulQA (0-shot) |54.78| |Winogrande (5-shot) |73.24| |GSM8k (5-shot) |40.18|
zhaorui-nb/Phi-3-mini-4k-instruct._.lora_ft._.Setting1
zhaorui-nb
2024-06-20T21:33:27Z
5
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T21:20:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
damgomz/ft_2_16e6_base_x2
damgomz
2024-06-20T21:22:30Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-20T16:23:15Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 108679.76357269289 | | Emissions (Co2eq in kg) | 0.0657637579194956 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.2830216942648107 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1132067917438846 | | Consumed energy (kWh) | 1.3962284860086962 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2092085448774338 | | Emissions (Co2eq in kg) | 0.042566240732638046 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_16e6_base_x2 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1.6e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.702362 | 0.482805 | | 1 | 0.325414 | 0.265861 | 0.906338 | | 2 | 0.227947 | 0.279596 | 0.936554 | | 3 | 0.188395 | 0.252933 | 0.933670 | | 4 | 0.143702 | 0.272505 | 0.909145 | | 5 | 0.114262 | 0.251677 | 0.923586 | | 6 | 0.080045 | 0.311559 | 0.911560 |
kanishka/smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-4
kanishka
2024-06-20T21:14:03Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "dataset:kanishka/counterfactual_babylm_measure_nps_as_singular_new", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-19T22:35:38Z
--- tags: - generated_from_trainer datasets: - kanishka/counterfactual_babylm_measure_nps_as_singular_new metrics: - accuracy model-index: - name: smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-4 results: - task: name: Causal Language Modeling type: text-generation dataset: name: kanishka/counterfactual_babylm_measure_nps_as_singular_new type: kanishka/counterfactual_babylm_measure_nps_as_singular_new metrics: - name: Accuracy type: accuracy value: 0.40681131693060796 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-4 This model was trained from scratch on the kanishka/counterfactual_babylm_measure_nps_as_singular_new dataset. It achieves the following results on the evaluation set: - Loss: 3.4240 - Accuracy: 0.4068 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 4.0517 | 1.0 | 18602 | 4.2617 | 0.3086 | | 3.5614 | 2.0 | 37204 | 3.7325 | 0.3617 | | 3.3871 | 3.0 | 55806 | 3.5926 | 0.3794 | | 3.2873 | 4.0 | 74408 | 3.4903 | 0.3889 | | 3.2166 | 5.0 | 93010 | 3.4705 | 0.3930 | | 3.1683 | 6.0 | 111612 | 3.4386 | 0.3965 | | 3.122 | 7.0 | 130214 | 3.4230 | 0.3987 | | 3.0883 | 8.0 | 148816 | 3.4103 | 0.4020 | | 3.059 | 9.0 | 167418 | 3.4161 | 0.4022 | | 3.0294 | 10.0 | 186020 | 3.4004 | 0.4039 | | 3.0081 | 11.0 | 204622 | 3.4048 | 0.4041 | | 2.9849 | 12.0 | 223224 | 3.4068 | 0.4046 | | 2.9618 | 13.0 | 241826 | 3.4127 | 0.4048 | | 2.9398 | 14.0 | 260428 | 3.4079 | 0.4054 | | 2.9226 | 15.0 | 279030 | 3.3963 | 0.4065 | | 2.9009 | 16.0 | 297632 | 3.4036 | 0.4068 | | 2.8845 | 17.0 | 316234 | 3.4090 | 0.4067 | | 2.8685 | 18.0 | 334836 | 3.4054 | 0.4071 | | 2.8513 | 19.0 | 353438 | 3.4187 | 0.4069 | | 2.8368 | 20.0 | 372040 | 3.4240 | 0.4068 | ### Framework versions - Transformers 4.38.0 - Pytorch 2.3.1+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
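Since no usage example is given, a minimal generation sketch (the prompt and sampling settings are illustrative only) might look like this:

```python
# Minimal generation sketch for this from-scratch causal LM.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "kanishka/smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The children played with", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```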
magnifi/parser_user_v8-0620-epoch8-0.002_nosystemprompt
magnifi
2024-06-20T21:13:35Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T21:11:37Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rajavaid/t5-large_PREFIX_TUNING_SEQ2SEQ
rajavaid
2024-06-20T21:10:54Z
4
0
peft
[ "peft", "arxiv:1910.09700", "base_model:google-t5/t5-large", "base_model:adapter:google-t5/t5-large", "region:us" ]
null
2024-06-20T21:10:52Z
--- library_name: peft base_model: t5-large --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
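The card's metadata identifies this repository as a PEFT prefix-tuning adapter for `t5-large`; a minimal loading sketch (the example input is illustrative, since the card does not state the training task) could look like this:

```python
# Sketch: attach the prefix-tuning adapter to its t5-large base model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
model = PeftModel.from_pretrained(base, "rajavaid/t5-large_PREFIX_TUNING_SEQ2SEQ")
tokenizer = AutoTokenizer.from_pretrained("t5-large")

# Illustrative input only; replace with text matching the task the adapter was trained for.
inputs = tokenizer("The quarterly results exceeded expectations.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```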
damgomz/ft_2_10e6_base_x1
damgomz
2024-06-20T20:58:24Z
12
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:14:44Z
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 107243.61335873604 | | Emissions (Co2eq in kg) | 0.0648947002454125 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 1.2660668294476174 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.1117107176408169 | | Consumed energy (kWh) | 1.377777547088439 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.2064439557155669 | | Emissions (Co2eq in kg) | 0.04200374856550495 | ## Note 19 juin 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_2_10e6_base_x1 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 1e-05 | | batch_size | 2 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.735206 | 0.283475 | | 1 | 0.293297 | 0.231790 | 0.926808 | | 2 | 0.212934 | 0.229867 | 0.905469 | | 3 | 0.171116 | 0.247606 | 0.901981 | | 4 | 0.126542 | 0.231386 | 0.923783 | | 5 | 0.088685 | 0.252406 | 0.915788 | | 6 | 0.057451 | 0.330286 | 0.911676 |
Isha1218/t5-small-finetuned-xsum
Isha1218
2024-06-20T20:57:11Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-20T17:21:07Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: validation args: default metrics: - name: Rouge1 type: rouge value: 28.2993 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4781 - Rouge1: 28.2993 - Rouge2: 7.7362 - Rougel: 22.2396 - Rougelsum: 22.2498 - Gen Len: 18.8252 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.7071 | 1.0 | 12753 | 2.4781 | 28.2993 | 7.7362 | 22.2396 | 22.2498 | 18.8252 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
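A minimal usage sketch for this summarization fine-tune (the input article is a placeholder):

```python
# Sketch: summarize a short text with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="Isha1218/t5-small-finetuned-xsum")
article = (
    "The local council has approved plans for a new cycle path along the river, "
    "with construction expected to begin next spring."
)  # placeholder article text
print(summarizer(article, max_length=30, min_length=5, do_sample=False)[0]["summary_text"])
```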
MarkBW/polaroid-filmstyle-xl
MarkBW
2024-06-20T20:56:39Z
35
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-06-20T20:55:51Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/polaroidv2_00285_.jpeg base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: polaroid film photography --- # polaroid-filmstyle-xl <Gallery /> ## Model description A grainy polaroid film style :) By: pixelworld_ai ## Trigger words You should use `polaroid film photography` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/MarkBW/polaroid-filmstyle-xl/tree/main) them in the Files & versions tab.
varun-v-rao/bart-large-lora-2.36M-squad-model1
varun-v-rao
2024-06-20T20:50:48Z
21
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "question-answering", "generated_from_trainer", "dataset:varun-v-rao/squad", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-06-20T18:14:38Z
--- license: apache-2.0 base_model: facebook/bart-large tags: - generated_from_trainer datasets: - varun-v-rao/squad model-index: - name: bart-large-lora-2.36M-squad-model1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-lora-2.36M-squad-model1 This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 50 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
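A minimal usage sketch, assuming the checkpoint loads directly as an extractive question-answering model (the question and context are sample SQuAD-style inputs, not from the card):

```python
# Sketch: extractive QA with the fine-tuned checkpoint via the question-answering pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="varun-v-rao/bart-large-lora-2.36M-squad-model1")
result = qa(
    question="Where do water droplets collide with ice crystals?",
    context=(
        "Precipitation forms as smaller droplets coalesce via collision with other "
        "rain drops or ice crystals within a cloud."
    ),
)
print(result["answer"], result["score"])
```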
mkay8/llama3-8b-instruct-Arabic-mental_health_QA
mkay8
2024-06-20T20:44:07Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T19:30:40Z
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
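The card's "How to Get Started with the Model" section is still empty. Purely as a placeholder, here is a minimal sketch of loading a chat-style Llama checkpoint (matching the `llama`, `text-generation`, and `conversational` tags) with 🤗 transformers; the repo id `your-username/your-model` is a stand-in, not the actual model id, and the generation settings are arbitrary.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual model repository once documented.
model_id = "your-username/your-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The "conversational" tag suggests a chat template; apply it if the tokenizer ships one.
messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```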
damgomz/ft_1_13e6_base_x2
damgomz
2024-06-20T20:18:45Z
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-19T16:15:18Z
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform.
---

## Environmental Impact (CODE CARBON DEFAULT)

| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 104860.06448483469 |
| Emissions (Co2eq in kg) | 0.0634524317167644 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.237928592727256 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.1092281919856866 |
| Consumed energy (kWh) | 1.3471567847129462 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |

## Environmental Impact (for one core)

| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.20185562413330674 |
| Emissions (Co2eq in kg) | 0.04107019192322692 |

## Note

19 June 2024

## My Config

| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_1_13e6_base_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 1 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |

## Training and Testing steps

| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.723915 | 0.309982 |
| 1 | 0.340977 | 0.291564 | 0.920691 |
| 2 | 0.244621 | 0.262026 | 0.924940 |
| 3 | 0.194966 | 0.241931 | 0.927945 |
| 4 | 0.143137 | 0.253461 | 0.918482 |
| 5 | 0.099501 | 0.307317 | 0.913578 |
| 6 | 0.063005 | 0.329634 | 0.922369 |
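Since the config above fine-tunes `albert-base-v2` for text classification and the card ships a text-classification widget, a minimal inference sketch could look like the following. It assumes the repository holds standard sequence-classification weights loadable by the 🤗 pipeline; the meaning of the output labels is not documented on this card.

```py
from transformers import pipeline

# Repo id and task taken from this card; label semantics are not documented,
# so treat the returned labels as opaque until the card says otherwise.
classifier = pipeline("text-classification", model="damgomz/ft_1_13e6_base_x2")

text = (
    "GEPS Techno is the pioneer of hybridization of renewable energies at sea. "
    "We imagine, design and commercialize innovative off-grid systems that aim to "
    "generate power at sea, stabilize and collect data."
)
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]
```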
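The environmental-impact tables are titled "CODE CARBON DEFAULT", but the card does not show the tracking code. The sketch below only illustrates the usual `codecarbon` pattern that produces such duration/energy/emissions figures; the actual training script may differ, and a short sleep stands in for the roughly 104,860-second fine-tuning run.

```py
import time

from codecarbon import EmissionsTracker

# Default tracker settings, matching the "CODE CARBON DEFAULT" section title.
tracker = EmissionsTracker()
tracker.start()
try:
    time.sleep(5)  # placeholder for the actual fine-tuning loop
finally:
    emissions_kg = tracker.stop()  # CO2eq in kg, comparable to the "Emissions" row above

print(f"Estimated emissions: {emissions_kg} kg CO2eq")
```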