| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Paresh1879/Img2Img-Controlnet-ComfyUI | Paresh1879 | 2024-06-23T02:36:37Z | 0 | 1 | null | ["img2img", "ComfyUI", "Controlnet", "license:apache-2.0", "region:us"] | null | 2024-06-12T05:52:10Z |
---
license: apache-2.0
tags:
- img2img
- ComfyUI
- Controlnet
---
# Img2Img-Controlnet-ComfyUI

This repository contains an Img2Img project using ControlNet in ComfyUI. It focuses on two styles: GTA and Anime.
## Workflow
1. **Input Image**: The process starts by passing the input image to the LineArt and OpenPose preprocessors.
2. **ControlNet**: The preprocessed images are fed into ControlNet.
3. **Efficient Loader**: ControlNet outputs are then passed to the Efficient Loader, which loads the weights.
4. **KSampler**: Finally, the loaded data is processed through KSampler to generate the output image.
## Also deployed on - [OpenArt AI](https://openart.ai/workflows/bongo_lame_87/img2img-comfyui-controlnet/7HBWH8HXMhzIfOx9w1LM)
## Load the model in ComfyUI - [Workflow-Model](https://huggingface.co/Paresh1879/Img2Img-Controlnet-ComfyUI/blob/main/comfyui_workflow.json)
## Prompts Used:
### GTA
1. **Positive Prompt** : In the style of Grand Theft Auto, loading screens, (palm trees), GTA style artwork, highly detailed, urban scene with numerous palm trees, neon lights, and graffiti, trending on ArtStation, preserving the individual's race, color and hair.
2. **Negative Prompt** : (worst quality, low quality = 1.3), drastic change in facial features
### Anime
1. **Positive Prompt** : In the style of classic anime, vibrant colors, large expressive eyes, highly detailed backgrounds, intricate character designs, dynamic poses, soft shading, fantasy or urban settings with cherry blossoms, traditional Japanese architecture, and bustling cityscapes, preserving the individual's race, color and hair.
2. **Negative Prompt** : (worst quality, low quality = 1.3), drastic change in facial features
## Installation
1. **Clone the Repository**:
```bash
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
```
2. **Install Dependencies**: Install ComfyUI's Python requirements (see the commands just after this list).
3. **Install Nodes and Models**:
Copy the custom nodes and models listed to the respective directories in your ComfyUI installation.
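ComfyUI ships a `requirements.txt`, so step 2 is typically just the following (a minimal sketch, assuming PyTorch is already installed for your platform):
```bash
# Run from inside the ComfyUI checkout
pip install -r requirements.txt
```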
## Custom Nodes
### Comfyroll Studio
- CR Aspect Ratio
- CR Multi-ControlNet Stack
### ComfyUI
- PreviewImage
- SaveImage
- LoadImage
### ComfyUI Nodes for Inference.Core
- CannyEdgePreprocessor
- OpenposePreprocessor
- LineArtPreprocessor
### Efficiency Nodes for ComfyUI Version 2.0+
- Efficient Loader
- XY Input: CFG Scale
- XY Plot
- KSampler (Efficient)
## Models - Checkpoint and VAE
- Checkpoint: [Dreamshaper](https://huggingface.co/Lykon/DreamShaper/blob/main/DreamShaper_8_pruned.safetensors) & [Realistic Vision](https://huggingface.co/numeraz/realisticvisionv60B1/blob/main/realisticVisionV60B1_v51VAE.safetensors)
- VAE: [SD VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt)
## KSampler Settings
The following settings were used in the KSampler (Efficient) node for ComfyUI:
- **Seed**: 4091745839
- **Steps**: 20
- **CFG**: 4.0
- **Sampler Name**: dpmpp_3m_sde_gpu
- **Scheduler**: karras
- **Denoise**: 1.00
- **Preview Method**: auto
- **VAE Decode**: true
These settings help achieve efficient sampling while maintaining output quality in ComfyUI.
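For reference, these values map onto the sampler node's inputs in ComfyUI's exported API-format JSON roughly as follows (a sketch only; the exact `class_type` string for the Efficiency Nodes variant and the omitted model/conditioning links depend on the actual workflow export):
```json
{
  "class_type": "KSampler (Efficient)",
  "inputs": {
    "seed": 4091745839,
    "steps": 20,
    "cfg": 4.0,
    "sampler_name": "dpmpp_3m_sde_gpu",
    "scheduler": "karras",
    "denoise": 1.0
  }
}
```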
## Docker
A Dockerfile is included for easy setup and deployment.
---
|
Ariffiq99/COPA_CRAB_xlm_roberta_large_finetuned | Ariffiq99 | 2024-06-23T02:36:05Z | 7 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/CRAB_xlm_roberta_large_finetuned", "base_model:finetune:Ariffiq99/CRAB_xlm_roberta_large_finetuned", "license:mit", "endpoints_compatible", "region:us"] | multiple-choice | 2024-06-23T02:22:47Z |
---
license: mit
base_model: Ariffiq99/CRAB_xlm_roberta_large_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_CRAB_xlm_roberta_large_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_CRAB_xlm_roberta_large_finetuned
This model is a fine-tuned version of [Ariffiq99/CRAB_xlm_roberta_large_finetuned](https://huggingface.co/Ariffiq99/CRAB_xlm_roberta_large_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6929
- F1: 0.496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
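As a rough sketch, these correspond to the following `transformers` `TrainingArguments` (an assumed reconstruction; the actual training script is not included in this card):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="COPA_CRAB_xlm_roberta_large_finetuned",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```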
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| No log | 1.0 | 63 | 0.6938 | 0.538 |
| No log | 2.0 | 126 | 0.6944 | 0.49 |
| No log | 3.0 | 189 | 0.6926 | 0.522 |
| No log | 4.0 | 252 | 0.6934 | 0.492 |
| No log | 5.0 | 315 | 0.6928 | 0.506 |
| No log | 6.0 | 378 | 0.6945 | 0.502 |
| No log | 7.0 | 441 | 0.6940 | 0.476 |
| 0.7077 | 8.0 | 504 | 0.6938 | 0.528 |
| 0.7077 | 9.0 | 567 | 0.6935 | 0.488 |
| 0.7077 | 10.0 | 630 | 0.6929 | 0.496 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ariffiq99/COPA_CRAB_albert_base_finetuned | Ariffiq99 | 2024-06-23T02:33:59Z | 6 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "albert", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/CRAB_albert_base_finetuned", "base_model:finetune:Ariffiq99/CRAB_albert_base_finetuned", "license:apache-2.0", "endpoints_compatible", "region:us"] | multiple-choice | 2024-06-23T02:32:02Z |
---
license: apache-2.0
base_model: Ariffiq99/CRAB_albert_base_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_CRAB_albert_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_CRAB_albert_base_finetuned
This model is a fine-tuned version of [Ariffiq99/CRAB_albert_base_finetuned](https://huggingface.co/Ariffiq99/CRAB_albert_base_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4567
- F1: 0.674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| No log | 1.0 | 63 | 0.6274 | 0.666 |
| No log | 2.0 | 126 | 0.5703 | 0.69 |
| No log | 3.0 | 189 | 0.6324 | 0.704 |
| No log | 4.0 | 252 | 0.7201 | 0.69 |
| No log | 5.0 | 315 | 1.0079 | 0.686 |
| No log | 6.0 | 378 | 1.1511 | 0.678 |
| No log | 7.0 | 441 | 1.2763 | 0.67 |
| 0.2791 | 8.0 | 504 | 1.3775 | 0.676 |
| 0.2791 | 9.0 | 567 | 1.4347 | 0.674 |
| 0.2791 | 10.0 | 630 | 1.4567 | 0.674 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/AceGPT-v1.5-13B-Chat-GGUF | mradermacher | 2024-06-23T02:29:47Z | 21 | 0 | transformers | ["transformers", "gguf", "ar", "zh", "en", "base_model:FreedomIntelligence/AceGPT-v1.5-13B-Chat", "base_model:quantized:FreedomIntelligence/AceGPT-v1.5-13B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-06-23T01:42:12Z |
---
base_model: FreedomIntelligence/AceGPT-v1.5-13B-Chat
language:
- ar
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FreedomIntelligence/AceGPT-v1.5-13B-Chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
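For example, a quick local test might look like this (a sketch; assumes the `huggingface_hub` CLI and a built llama.cpp, and uses one of the files from the table below):
```bash
# Download one quant and run it with llama.cpp's CLI
huggingface-cli download mradermacher/AceGPT-v1.5-13B-Chat-GGUF \
  AceGPT-v1.5-13B-Chat.Q4_K_M.gguf --local-dir .
./llama-cli -m AceGPT-v1.5-13B-Chat.Q4_K_M.gguf -p "Hello" -n 128
```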
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ3_M.gguf) | IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q3_K_M.gguf) | Q3_K_M | 6.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q3_K_L.gguf) | Q3_K_L | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.IQ4_XS.gguf) | IQ4_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q4_K_S.gguf) | Q4_K_S | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q5_K_S.gguf) | Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q5_K_M.gguf) | Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q6_K.gguf) | Q6_K | 10.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AceGPT-v1.5-13B-Chat-GGUF/resolve/main/AceGPT-v1.5-13B-Chat.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ariffiq99/COPA_CRAB_Bert_Base_Uncased_Finetuned | Ariffiq99 | 2024-06-23T02:25:56Z | 7 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/CRAB_bert_base_uncased_finetuned", "base_model:finetune:Ariffiq99/CRAB_bert_base_uncased_finetuned", "license:apache-2.0", "endpoints_compatible", "region:us"] | multiple-choice | 2024-06-23T02:22:26Z |
---
license: apache-2.0
base_model: Ariffiq99/CRAB_bert_base_uncased_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: COPA_CRAB_Bert_Base_Uncased_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# COPA_CRAB_Bert_Base_Uncased_Finetuned
This model is a fine-tuned version of [Ariffiq99/CRAB_bert_base_uncased_finetuned](https://huggingface.co/Ariffiq99/CRAB_bert_base_uncased_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6690
- F1: 0.7181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 63 | 0.6481 | 0.6345 |
| No log | 2.0 | 126 | 0.5997 | 0.6829 |
| No log | 3.0 | 189 | 0.5723 | 0.6944 |
| No log | 4.0 | 252 | 0.5751 | 0.6898 |
| No log | 5.0 | 315 | 0.5906 | 0.7149 |
| No log | 6.0 | 378 | 0.6036 | 0.7273 |
| No log | 7.0 | 441 | 0.6245 | 0.7280 |
| 0.4609 | 8.0 | 504 | 0.6476 | 0.7213 |
| 0.4609 | 9.0 | 567 | 0.6688 | 0.7181 |
| 0.4609 | 10.0 | 630 | 0.6690 | 0.7181 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
lashao/miewid-msv2-v3 | lashao | 2024-06-23T01:30:41Z | 6 | 0 | transformers | ["transformers", "safetensors", "miewid", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us"] | feature-extraction | 2024-06-23T01:30:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rizwanaslam/educate-ai-v2 | rizwanaslam | 2024-06-23T01:30:00Z | 8 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-06-23T00:16:55Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** rizwanaslam
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
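A generic `transformers` loading sketch (an assumption: this presumes the repo contains full merged weights rather than LoRA adapters only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rizwanaslam/educate-ai-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt for illustration
inputs = tokenizer("Explain photosynthesis in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```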
|
mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF | mradermacher | 2024-06-23T01:21:36Z | 35 | 0 | transformers | ["transformers", "gguf", "en", "base_model:SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup", "base_model:quantized:SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-06-23T00:27:19Z |
---
base_model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
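As a concrete sketch (assuming the `llama-cpp-python` bindings and a locally downloaded file from the table below):
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below was downloaded beforehand
llm = Llama(model_path="LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a one-line greeting.", max_tokens=32)
print(out["choices"][0]["text"])
```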
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LLAMA-3_8B_Unaligned_Alpha_RP_Soup-GGUF/resolve/main/LLAMA-3_8B_Unaligned_Alpha_RP_Soup.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
booksouls/fasttext-goodreads-vectors | booksouls | 2024-06-23T01:15:21Z | 4 | 0 | fasttext | ["fasttext", "feature-extraction", "en", "dataset:booksouls/goodreads-book-descriptions", "region:us"] | feature-extraction | 2024-06-22T22:35:17Z |
---
datasets:
- booksouls/goodreads-book-descriptions
language:
- en
library_name: fasttext
pipeline_tag: feature-extraction
---
|
ChaoticNeutrals/Templar_v1_8B | ChaoticNeutrals | 2024-06-23T00:58:01Z | 244 | 3 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "en", "base_model:ChaoticNeutrals/T-900-8B", "base_model:finetune:ChaoticNeutrals/T-900-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-23T00:05:13Z |
---
base_model:
- ChaoticNeutrals/T-900-8B
- ResplendentAI/Nymph_8B
license: apache-2.0
language:
- en
---
# Templar v1

Templar is a SLERP merge of T-900 and Nymph, and it shows some emergent properties I was not expecting to see.
This model is purpose-made for roleplaying and has seen a plethora of data. I assure you it will serve that purpose very well.
|
phunganhsang/PhoBert_Lexical_Dataset59KBoDuoi | phunganhsang | 2024-06-23T00:42:28Z | 6 | 0 | transformers | ["transformers", "safetensors", "roberta", "generated_from_trainer", "base_model:vinai/phobert-base-v2", "base_model:finetune:vinai/phobert-base-v2", "endpoints_compatible", "region:us"] | null | 2024-06-23T00:42:08Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PhoBert_Lexical_Dataset59KBoDuoi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PhoBert_Lexical_Dataset59KBoDuoi
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5309
- Accuracy: 0.9007
- F1: 0.9012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:|
| No log | 0.2558 | 200 | 0.3375 | 0.8526 | 0.8503 |
| No log | 0.5115 | 400 | 0.3284 | 0.8569 | 0.8596 |
| No log | 0.7673 | 600 | 0.2900 | 0.8734 | 0.8745 |
| 0.3414 | 1.0230 | 800 | 0.2884 | 0.8846 | 0.8851 |
| 0.3414 | 1.2788 | 1000 | 0.2856 | 0.8818 | 0.8830 |
| 0.3414 | 1.5345 | 1200 | 0.2902 | 0.8799 | 0.8811 |
| 0.3414 | 1.7903 | 1400 | 0.2621 | 0.8868 | 0.8871 |
| 0.2522 | 2.0460 | 1600 | 0.2861 | 0.8831 | 0.8847 |
| 0.2522 | 2.3018 | 1800 | 0.2749 | 0.8869 | 0.8877 |
| 0.2522 | 2.5575 | 2000 | 0.2704 | 0.8874 | 0.8884 |
| 0.2522 | 2.8133 | 2200 | 0.2676 | 0.8919 | 0.8921 |
| 0.2085 | 3.0691 | 2400 | 0.2889 | 0.8908 | 0.8916 |
| 0.2085 | 3.3248 | 2600 | 0.2731 | 0.8913 | 0.8911 |
| 0.2085 | 3.5806 | 2800 | 0.2812 | 0.8893 | 0.8908 |
| 0.2085 | 3.8363 | 3000 | 0.2970 | 0.8854 | 0.8871 |
| 0.1773 | 4.0921 | 3200 | 0.2802 | 0.8933 | 0.8945 |
| 0.1773 | 4.3478 | 3400 | 0.3058 | 0.8899 | 0.8909 |
| 0.1773 | 4.6036 | 3600 | 0.2812 | 0.8902 | 0.8915 |
| 0.1773 | 4.8593 | 3800 | 0.2884 | 0.8921 | 0.8934 |
| 0.1517 | 5.1151 | 4000 | 0.3009 | 0.8868 | 0.8883 |
| 0.1517 | 5.3708 | 4200 | 0.3231 | 0.8942 | 0.8948 |
| 0.1517 | 5.6266 | 4400 | 0.2762 | 0.8980 | 0.8986 |
| 0.1517 | 5.8824 | 4600 | 0.3059 | 0.8990 | 0.8994 |
| 0.1276 | 6.1381 | 4800 | 0.3180 | 0.8986 | 0.8993 |
| 0.1276 | 6.3939 | 5000 | 0.3295 | 0.8940 | 0.8950 |
| 0.1276 | 6.6496 | 5200 | 0.3083 | 0.8970 | 0.8977 |
| 0.1276 | 6.9054 | 5400 | 0.3209 | 0.8974 | 0.8978 |
| 0.108 | 7.1611 | 5600 | 0.3635 | 0.8900 | 0.8915 |
| 0.108 | 7.4169 | 5800 | 0.3582 | 0.8985 | 0.8986 |
| 0.108 | 7.6726 | 6000 | 0.3461 | 0.8981 | 0.8987 |
| 0.108 | 7.9284 | 6200 | 0.3579 | 0.8921 | 0.8931 |
| 0.0933 | 8.1841 | 6400 | 0.3858 | 0.8920 | 0.8933 |
| 0.0933 | 8.4399 | 6600 | 0.3891 | 0.8951 | 0.8956 |
| 0.0933 | 8.6957 | 6800 | 0.3677 | 0.8992 | 0.8992 |
| 0.0933 | 8.9514 | 7000 | 0.3938 | 0.8976 | 0.8982 |
| 0.0794 | 9.2072 | 7200 | 0.3902 | 0.8983 | 0.8986 |
| 0.0794 | 9.4629 | 7400 | 0.4381 | 0.8943 | 0.8954 |
| 0.0794 | 9.7187 | 7600 | 0.3928 | 0.8992 | 0.8998 |
| 0.0794 | 9.9744 | 7800 | 0.4024 | 0.8963 | 0.8970 |
| 0.0718 | 10.2302 | 8000 | 0.3989 | 0.8975 | 0.8981 |
| 0.0718 | 10.4859 | 8200 | 0.4059 | 0.9014 | 0.9010 |
| 0.0718 | 10.7417 | 8400 | 0.4263 | 0.8979 | 0.8986 |
| 0.0614 | 10.9974 | 8600 | 0.4150 | 0.8987 | 0.8992 |
| 0.0614 | 11.2532 | 8800 | 0.4828 | 0.8950 | 0.8959 |
| 0.0614 | 11.5090 | 9000 | 0.4294 | 0.8979 | 0.8983 |
| 0.0614 | 11.7647 | 9200 | 0.4490 | 0.8944 | 0.8955 |
| 0.0565 | 12.0205 | 9400 | 0.4235 | 0.8962 | 0.8967 |
| 0.0565 | 12.2762 | 9600 | 0.4713 | 0.8972 | 0.8979 |
| 0.0565 | 12.5320 | 9800 | 0.4682 | 0.8997 | 0.9001 |
| 0.0565 | 12.7877 | 10000 | 0.4638 | 0.8995 | 0.9002 |
| 0.052 | 13.0435 | 10200 | 0.4387 | 0.8974 | 0.8980 |
| 0.052 | 13.2992 | 10400 | 0.4574 | 0.9000 | 0.9004 |
| 0.052 | 13.5550 | 10600 | 0.4669 | 0.8990 | 0.8994 |
| 0.052 | 13.8107 | 10800 | 0.4747 | 0.8954 | 0.8964 |
| 0.0458 | 14.0665 | 11000 | 0.4753 | 0.8988 | 0.8995 |
| 0.0458 | 14.3223 | 11200 | 0.4989 | 0.8977 | 0.8982 |
| 0.0458 | 14.5780 | 11400 | 0.4924 | 0.8981 | 0.8987 |
| 0.0458 | 14.8338 | 11600 | 0.5108 | 0.9000 | 0.9005 |
| 0.0419 | 15.0895 | 11800 | 0.4892 | 0.9000 | 0.9004 |
| 0.0419 | 15.3453 | 12000 | 0.5124 | 0.9000 | 0.9005 |
| 0.0419 | 15.6010 | 12200 | 0.5102 | 0.8997 | 0.9003 |
| 0.0419 | 15.8568 | 12400 | 0.5056 | 0.8992 | 0.8997 |
| 0.0374 | 16.1125 | 12600 | 0.4842 | 0.8995 | 0.8996 |
| 0.0374 | 16.3683 | 12800 | 0.5275 | 0.8979 | 0.8987 |
| 0.0374 | 16.6240 | 13000 | 0.5248 | 0.8975 | 0.8984 |
| 0.0374 | 16.8798 | 13200 | 0.5312 | 0.8996 | 0.9004 |
| 0.0341 | 17.1355 | 13400 | 0.5086 | 0.9014 | 0.9018 |
| 0.0341 | 17.3913 | 13600 | 0.5261 | 0.8990 | 0.8996 |
| 0.0341 | 17.6471 | 13800 | 0.5242 | 0.8988 | 0.8990 |
| 0.0341 | 17.9028 | 14000 | 0.5340 | 0.8992 | 0.8998 |
| 0.0319 | 18.1586 | 14200 | 0.5314 | 0.8995 | 0.8998 |
| 0.0319 | 18.4143 | 14400 | 0.5287 | 0.9005 | 0.9007 |
| 0.0319 | 18.6701 | 14600 | 0.5353 | 0.9007 | 0.9012 |
| 0.0319 | 18.9258 | 14800 | 0.5287 | 0.9017 | 0.9021 |
| 0.0305 | 19.1816 | 15000 | 0.5307 | 0.9017 | 0.9021 |
| 0.0305 | 19.4373 | 15200 | 0.5299 | 0.9009 | 0.9014 |
| 0.0305 | 19.6931 | 15400 | 0.5315 | 0.9005 | 0.9010 |
| 0.0305 | 19.9488 | 15600 | 0.5309 | 0.9007 | 0.9012 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
subhuatharva/swim-224-base-satellite-image-classification | subhuatharva | 2024-06-23T00:36:01Z | 9 | 0 | transformers | ["transformers", "safetensors", "image-classification", "arxiv:2111.1472", "arxiv:2103.14030", "endpoints_compatible", "region:us"] | image-classification | 2024-06-22T20:43:42Z |
---
metrics:
- roc_auc
library_name: transformers
pipeline_tag: image-classification
---
## Model Details
- **Model Type**: Image classification / feature backbone
#### Model Stats:
- **Params (M)**: 71.1
- **GMACs**: 13.7
- **Activations (M)**: 48.3
- **Image size**: 224 x 224
## Papers:
- **AutoFormerV2**: https://arxiv.org/abs/2111.1472
- **Swin Transformer**: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030
- **Original**: https://github.com/microsoft/Cream/tree/main/AutoFormerV2
## How to load the model
```python
import timm
from huggingface_hub import hf_hub_download
from safetensors.torch import load_model

REPO_ID = "subhuatharva/swim-224-base-satellite-image-classification"
FILENAME = "model.safetensors"

# Download the weights file from the Hub
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Initialize the model architecture; the exact timm model name is assumed here
model = timm.create_model(
    "swin_base_patch4_window7_224",
    num_classes=17,
)

# Load the safetensors weights into the model
load_model(model, model_path)
model.eval()
```
|
antoniow12/speecht5_tts_mongolian | antoniow12 | 2024-06-22T23:59:01Z | 9 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us"] | text-to-audio | 2024-06-22T19:29:29Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: speecht5_tts_mongolian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts_mongolian
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
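A sketch of the equivalent `Seq2SeqTrainingArguments` (an assumed reconstruction; the training script itself is not part of this card):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_mongolian",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=500,
    max_steps=4000,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```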
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5093 | 16.2602 | 1000 | 0.4649 |
| 0.468 | 32.5203 | 2000 | 0.4519 |
| 0.4615 | 48.7805 | 3000 | 0.4510 |
| 0.464 | 65.0407 | 4000 | 0.4544 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Stephanie-S/gpt2_medium | Stephanie-S | 2024-06-22T23:52:05Z | 6 | 0 | transformers | ["transformers", "safetensors", "gpt2", "text-classification", "generated_from_trainer", "base_model:openai-community/gpt2-medium", "base_model:finetune:openai-community/gpt2-medium", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-classification | 2024-06-21T20:17:30Z |
---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2_medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_medium
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1775
- Accuracy: 0.9528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2182 | 1.0 | 1250 | 0.2023 | 0.9366 |
| 0.1332 | 2.0 | 2500 | 0.1775 | 0.9528 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hgissbkh/ALMA-13B-LoRA-SFT-xCOMET-QE-Multi | hgissbkh | 2024-06-22T23:48:30Z | 11 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-18T15:16:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hgissbkh/ALMA-13B-LoRA-CPO-CometKiwi-Multi | hgissbkh | 2024-06-22T23:47:22Z | 4 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-21T11:40:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Fischerboot/L3-Sophie-16r | Fischerboot | 2024-06-22T23:40:56Z | 6 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge", "base_model:merge:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge", "base_model:Fischerboot/sophie-16r", "base_model:merge:Fischerboot/sophie-16r", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-22T23:32:51Z |
---
base_model:
- Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
- Fischerboot/sophie-16r
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge) + [Fischerboot/sophie-16r](https://huggingface.co/Fischerboot/sophie-16r)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge+Fischerboot/sophie-16r
merge_method: passthrough
dtype: bfloat16
```
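To reproduce a merge from a config like this, mergekit's standard entry point can be used (a sketch; assumes `pip install mergekit` and the YAML above saved as `config.yaml`):
```bash
mergekit-yaml config.yaml ./merged-model --cuda
```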
|
ostoveland/test7 | ostoveland | 2024-06-22T23:37:03Z | 9 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:400", "loss:TripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-06-22T23:36:33Z |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:400
- loss:TripletLoss
widget:
- source_sentence: 'query: Ny duk til markise på verandaen.'
sentences:
- 'query: Boring og sprenging fjell'
- 'query: Solskjerming Duette gardiner'
- 'query: Bygge ark'
- source_sentence: 'query: Montering av kjøkken.'
sentences:
- 'query: Skaffe og montere Ikea-kjøkkenskap på vegg som trenger forsterkning'
- 'query: Ladestolpe til sameie'
- 'query: Sette opp ny baderoms innredning'
- source_sentence: 'query: Blikkenslager'
sentences:
- 'query: Drenering av enebolig med ca 125m2 grunnflate'
- 'query: Blikkenslager til mindre taklekkasje i overgang takstein og ventilasjonskanal/pipe'
- 'query: Bytte av glass'
- source_sentence: 'query: Montere Ikea kjøkken.'
sentences:
- 'query: Montering av lite epoq kjøkken'
- 'query: Audi 1999 - A6, 0 km - Oljeskift'
- 'query: Legging av vinyl på baderomsgulv'
- source_sentence: 'query: Bygging av platting'
sentences:
- 'query: Fasadevask - Når som helst'
- 'query: Terrasse'
- 'query: Sette inn takvinduer + vinduer i stuen.'
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: triplet
name: Triplet
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 0.78
name: Cosine Accuracy
- type: dot_accuracy
value: 0.28
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.79
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.78
name: Euclidean Accuracy
- type: max_accuracy
value: 0.79
name: Max Accuracy
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ostoveland/test7")
# Run inference
sentences = [
'query: Bygging av platting',
'query: Terrasse',
'query: Fasadevask - Når som helst',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:---------|
| cosine_accuracy | 0.78 |
| dot_accuracy | 0.28 |
| manhattan_accuracy | 0.79 |
| euclidean_accuracy | 0.78 |
| **max_accuracy** | **0.79** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 400 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 13.02 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.3 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.54 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------|:--------------------------------------------------------|
| <code>query: Bytte av kledning på hus</code> | <code>query: utskifting av kledning.</code> | <code>query: Innsetting av vedovn Dovre varm 3</code> |
| <code>query: Bytte gammel sirkulasjonspumpe til radiatorer borettslag Oslo</code> | <code>query: Sjekk av Upoterm anlegg for vannbåren gulvvarme</code> | <code>query: Nytt gulv</code> |
| <code>query: Renovere gammel grusvei</code> | <code>query: Klippe hekk.</code> | <code>query: Mure ringmur/grunnmur og støpe såle</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 1
}
```
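A minimal sketch of constructing this loss with Sentence Transformers, assuming the standard `TripletLoss` API; which model the loss wraps (this checkpoint or its base) is an assumption:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("ostoveland/test7")  # or the base model being finetuned
# Mirrors the parameters above: Euclidean distance, margin 1
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=1,
)
```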
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
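A sketch of how the non-default hyperparameters above map onto `SentenceTransformerTrainingArguments` in the Sentence Transformers v3 training API; `output_dir` is a placeholder:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```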
### Training Logs
| Epoch | Step | max_accuracy |
|:-----:|:----:|:------------:|
| 1.0 | 25 | 0.79 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf
|
RichardErkhov
| 2024-06-22T23:33:52Z | 13 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T23:25:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-Chat-v0.4 - GGUF
- Model creator: https://huggingface.co/TinyLlama/
- Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-Chat-v0.4.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-Chat-v0.4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-Chat-v0.4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-Chat-v0.4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-Chat-v0.4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-Chat-v0.4.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-Chat-v0.4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-Chat-v0.4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-Chat-v0.4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-Chat-v0.4.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-Chat-v0.4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-Chat-v0.4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-Chat-v0.4.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-Chat-v0.4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-Chat-v0.4.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-Chat-v0.4.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-Chat-v0.4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-Chat-v0.4.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-Chat-v0.4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-Chat-v0.4.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-Chat-v0.4.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-Chat-v0.4.Q8_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf/blob/main/TinyLlama-1.1B-Chat-v0.4.Q8_0.gguf) | Q8_0 | 1.09GB |
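To fetch one of the files above programmatically, a sketch using `huggingface_hub` (the Q4_K_M quant is picked arbitrarily as an example):
```python
from huggingface_hub import hf_hub_download

# Download a single quantized file from this repo
path = hf_hub_download(
    repo_id="RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.4-gguf",
    filename="TinyLlama-1.1B-Chat-v0.4.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```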
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, which allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T).
The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) following the [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format.
#### How to use
You will need transformers>=4.31.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-Chat-v0.4"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
CHAT_EOS_TOKEN_ID = 32002  # chat end-of-turn token id, passed below as eos_token_id
prompt = "How to get in a good university?"
formatted_prompt = (
f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)
sequences = pipeline(
formatted_prompt,
do_sample=True,
top_k=50,
top_p = 0.9,
num_return_sequences=1,
repetition_penalty=1.1,
max_new_tokens=1024,
eos_token_id=CHAT_EOS_TOKEN_ID,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
|
RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf
|
RichardErkhov
| 2024-06-22T23:33:45Z | 27 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T23:25:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-Chat-v0.1 - GGUF
- Model creator: https://huggingface.co/TinyLlama/
- Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-Chat-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-Chat-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-Chat-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-Chat-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-Chat-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-Chat-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-Chat-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-Chat-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-Chat-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-Chat-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-Chat-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-Chat-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-Chat-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-Chat-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-Chat-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-Chat-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-Chat-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-Chat-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-Chat-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-Chat-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-Chat-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-Chat-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.1-gguf/blob/main/TinyLlama-1.1B-Chat-v0.1.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- timdettmers/openassistant-guanaco
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, which allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b). The dataset used is [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
#### How to use
You will need transformers>=4.31.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-Chat-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
prompt = "What are the values in open source projects?"
formatted_prompt = (
f"### Human: {prompt}### Assistant:"
)
sequences = pipeline(
formatted_prompt,
do_sample=True,
top_k=50,
top_p = 0.7,
num_return_sequences=1,
repetition_penalty=1.1,
max_new_tokens=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
|
Fischerboot/L3-Sophie-8r
|
Fischerboot
| 2024-06-22T23:31:20Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge",
"base_model:merge:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge",
"base_model:Fischerboot/sophie-8r",
"base_model:merge:Fischerboot/sophie-8r",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T23:23:12Z |
---
base_model:
- Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
- Fischerboot/sophie-8r
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge) + [Fischerboot/sophie-8r](https://huggingface.co/Fischerboot/sophie-8r)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge+Fischerboot/sophie-8r
merge_method: passthrough
dtype: bfloat16
```
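To reproduce a merge like this, the configuration is typically written to a file and passed to the mergekit CLI; a sketch assuming the standard `mergekit-yaml` entry point:
```bash
pip install mergekit
# config.yaml holds the YAML block above
mergekit-yaml config.yaml ./merged-model --cuda
```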
|
martintomov/Codestral-22B-v0.1-Q4_K_M-GGUF
|
martintomov
| 2024-06-22T23:30:13Z | 5 | 0 | null |
[
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:other",
"region:us"
] | null | 2024-06-22T23:29:17Z |
---
base_model: mistralai/Codestral-22B-v0.1
language:
- code
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
- llama-cpp
- gguf-my-repo
inference: false
---
# martintmv/Codestral-22B-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Codestral-22B-v0.1`](https://huggingface.co/mistralai/Codestral-22B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Codestral-22B-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo martintmv/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo martintmv/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
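Once the server is up, it can be queried over HTTP; a sketch assuming llama.cpp's default port and `/completion` endpoint:
```bash
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```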
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo martintmv/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo martintmv/Codestral-22B-v0.1-Q4_K_M-GGUF --hf-file codestral-22b-v0.1-q4_k_m.gguf -c 2048
```
|
mradermacher/Llama3-70B-RAG-GGUF
|
mradermacher
| 2024-06-22T23:26:49Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:WendyHoang/Llama3-70B-RAG",
"base_model:quantized:WendyHoang/Llama3-70B-RAG",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T19:15:22Z |
---
base_model: WendyHoang/Llama3-70B-RAG
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/WendyHoang/Llama3-70B-RAG
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
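For the multi-part quants listed below, the parts are simply concatenated back into one GGUF file before use, for example:
```bash
cat Llama3-70B-RAG.Q6_K.gguf.part1of2 Llama3-70B-RAG.Q6_K.gguf.part2of2 > Llama3-70B-RAG.Q6_K.gguf
```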
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-70B-RAG-GGUF/resolve/main/Llama3-70B-RAG.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf
|
RichardErkhov
| 2024-06-22T23:25:04Z | 26 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T19:06:00Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-intermediate-step-1195k-token-2.5T - GGUF
- Model creator: https://huggingface.co/TinyLlama/
- Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q8_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, which allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. The branch name indicates the step and the number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86** |
|
RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf
|
RichardErkhov
| 2024-06-22T23:23:38Z | 17 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T19:05:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tiny-Vicuna-1B - GGUF
- Model creator: https://huggingface.co/Jiayi-Pan/
- Original model: https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tiny-Vicuna-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q2_K.gguf) | Q2_K | 0.4GB |
| [Tiny-Vicuna-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [Tiny-Vicuna-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [Tiny-Vicuna-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [Tiny-Vicuna-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [Tiny-Vicuna-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q3_K.gguf) | Q3_K | 0.51GB |
| [Tiny-Vicuna-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [Tiny-Vicuna-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [Tiny-Vicuna-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [Tiny-Vicuna-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q4_0.gguf) | Q4_0 | 0.59GB |
| [Tiny-Vicuna-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [Tiny-Vicuna-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [Tiny-Vicuna-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q4_K.gguf) | Q4_K | 0.62GB |
| [Tiny-Vicuna-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [Tiny-Vicuna-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q4_1.gguf) | Q4_1 | 0.65GB |
| [Tiny-Vicuna-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q5_0.gguf) | Q5_0 | 0.71GB |
| [Tiny-Vicuna-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [Tiny-Vicuna-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q5_K.gguf) | Q5_K | 0.73GB |
| [Tiny-Vicuna-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [Tiny-Vicuna-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q5_1.gguf) | Q5_1 | 0.77GB |
| [Tiny-Vicuna-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q6_K.gguf) | Q6_K | 0.84GB |
| [Tiny-Vicuna-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Jiayi-Pan_-_Tiny-Vicuna-1B-gguf/blob/main/Tiny-Vicuna-1B.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
language:
- en
license: apache-2.0
model-index:
- name: Tiny-Vicuna-1B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 33.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 55.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 33.82
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Jiayi-Pan/Tiny-Vicuna-1B
name: Open LLM Leaderboard
---
# Tiny Vicuna 1B
This model is a fine-tuned version of [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T) on the [WizardVicuna dataset](https://github.com/melodysdreamj/WizardVicunaLM).
It should be fully compatible with the Vicuna-v1.5 series.
This model is easy to iterate on for early experiments!
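Since the card does not restate the prompt template, here is a hedged usage sketch assuming the standard Vicuna-v1.5 format implied by the compatibility note above:
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Jiayi-Pan/Tiny-Vicuna-1B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Standard Vicuna-v1.5 prompt template (assumed, not stated on this card)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is a large language model? ASSISTANT:"
)
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```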
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Jiayi-Pan__Tiny-Vicuna-1B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |34.76|
|AI2 Reasoning Challenge (25-Shot)|33.45|
|HellaSwag (10-Shot) |55.92|
|MMLU (5-Shot) |25.45|
|TruthfulQA (0-shot) |33.82|
|Winogrande (5-shot) |58.41|
|GSM8k (5-shot) | 1.52|
|
RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf
|
RichardErkhov
| 2024-06-22T23:23:34Z | 122 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T19:05:24Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
UNfilteredAI-1B - GGUF
- Model creator: https://huggingface.co/UnfilteredAI/
- Original model: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UNfilteredAI-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q2_K.gguf) | Q2_K | 0.39GB |
| [UNfilteredAI-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ3_XS.gguf) | IQ3_XS | 0.43GB |
| [UNfilteredAI-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ3_S.gguf) | IQ3_S | 0.45GB |
| [UNfilteredAI-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K_S.gguf) | Q3_K_S | 0.45GB |
| [UNfilteredAI-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ3_M.gguf) | IQ3_M | 0.46GB |
| [UNfilteredAI-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K.gguf) | Q3_K | 0.49GB |
| [UNfilteredAI-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K_M.gguf) | Q3_K_M | 0.49GB |
| [UNfilteredAI-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K_L.gguf) | Q3_K_L | 0.53GB |
| [UNfilteredAI-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ4_XS.gguf) | IQ4_XS | 0.55GB |
| [UNfilteredAI-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_0.gguf) | Q4_0 | 0.57GB |
| [UNfilteredAI-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ4_NL.gguf) | IQ4_NL | 0.57GB |
| [UNfilteredAI-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_K_S.gguf) | Q4_K_S | 0.57GB |
| [UNfilteredAI-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_K.gguf) | Q4_K | 0.6GB |
| [UNfilteredAI-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_K_M.gguf) | Q4_K_M | 0.6GB |
| [UNfilteredAI-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_1.gguf) | Q4_1 | 0.63GB |
| [UNfilteredAI-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_0.gguf) | Q5_0 | 0.69GB |
| [UNfilteredAI-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_K_S.gguf) | Q5_K_S | 0.69GB |
| [UNfilteredAI-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_K.gguf) | Q5_K | 0.7GB |
| [UNfilteredAI-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_K_M.gguf) | Q5_K_M | 0.7GB |
| [UNfilteredAI-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_1.gguf) | Q5_1 | 0.74GB |
| [UNfilteredAI-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q6_K.gguf) | Q6_K | 0.81GB |
| [UNfilteredAI-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q8_0.gguf) | Q8_0 | 1.05GB |
Original model description:
---
license: other
language:
- en
tags:
- UnfilteredAI
---
# UNfilteredAI-1B
**Model Name**: UNfilteredAI-1B
**Model Type**: Text Generation
## About the Model
The UNfilteredAI-1B model is a large-scale text generation model developed by UnfilteredAI. This model is designed to push the boundaries of creativity and innovation in AI-generated content, without the constraints of traditional content moderation or filtering.
## Key Features
- **Uncensored and Unrestricted**: The UNfilteredAI-1B model is specifically engineered to generate text without any content restrictions or limitations. This allows for the exploration of a wide range of topics and styles, including potentially controversial or sensitive subject matter.
- **Extensive Training**: The model has been trained on a vast corpus of diverse textual data, enabling it to generate highly coherent and contextually relevant content across a broad range of domains.
- **Versatile Applications**: The UNfilteredAI-1B model can be utilized for a variety of text-based tasks, such as creative writing, conversational AI, and even educational or research-oriented applications.
## Intended Use
The UNfilteredAI-1B model is intended for use by experienced and responsible AI researchers, developers, and enthusiasts who are interested in pushing the boundaries of language generation and exploring the potential of uncensored AI technologies.
## Limitations and Ethical Considerations
- **Potential for Misuse**: The uncensored nature of the UNfilteredAI-1B model means that it could be used to generate harmful, unethical, or illegal content. Users should exercise caution and responsibility when utilizing this model.
- **Bias and Inconsistency**: As with many large language models, the UNfilteredAI-1B model may exhibit biases and inconsistencies in its outputs, which could lead to the generation of inaccurate, inappropriate, or even offensive content.
- **Sensitive Content**: The model is capable of generating explicit, adult-oriented, or otherwise sensitive content. Users should be aware of the potential risks and ensure that the model is used in an appropriate and ethical manner.
UnfilteredAI acknowledges the significant ethical considerations and potential risks associated with the development and deployment of uncensored AI models. We encourage users to engage with this model responsibly and to be mindful of the potential impact of their actions.
|
powermove72/SharkOgno2-9b-Passthrough
|
powermove72
| 2024-06-22T22:58:18Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Shark-1",
"eren23/OGNO-7b-dpo-truthful",
"conversational",
"custom_code",
"base_model:eren23/OGNO-7b-dpo-truthful",
"base_model:merge:eren23/OGNO-7b-dpo-truthful",
"base_model:powermove72/Shark-1",
"base_model:merge:powermove72/Shark-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T22:48:07Z |
---
base_model:
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
---
# SharkOgno2-9b-Passthrough
SharkOgno2-9b-Passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [powermove72/Shark-1](https://huggingface.co/powermove72/Shark-1)
* [eren23/OGNO-7b-dpo-truthful](https://huggingface.co/eren23/OGNO-7b-dpo-truthful)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: powermove72/Shark-1
layer_range: [8, 16]
- sources:
- model: eren23/OGNO-7b-dpo-truthful
layer_range: [0, 32]
merge_method: passthrough
tokenizer_source: union
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "powermove72/SharkOgno2-9b-Passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ostoveland/test6
|
ostoveland
| 2024-06-22T22:43:10Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-22T22:42:40Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ostoveland/test6
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ostoveland/test6')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ostoveland/test6')
model = AutoModel.from_pretrained('ostoveland/test6')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ostoveland/test6)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1500 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 1}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 500,
"weight_decay": 0.01
}
```
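Put together, the training setup above corresponds roughly to this sketch of the legacy `fit()` API; the triplet texts are hypothetical placeholders:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("ostoveland/test6")
# Hypothetical triplets: (anchor, positive, negative)
train_examples = [InputExample(texts=["anchor text", "similar text", "unrelated text"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model=model, triplet_margin=1)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=500,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```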
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
surya-narayanan/biology
|
surya-narayanan
| 2024-06-22T22:37:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-11T04:40:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
willgrobots/checkpointsaved
|
willgrobots
| 2024-06-22T22:35:11Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"moondream1",
"text-generation",
"image-text-to-text",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-06-22T22:30:22Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
---
moondream2 is a small vision language model designed to run efficiently on edge devices. Check out the [GitHub repository](https://github.com/vikhyat/moondream) for details, or try it out on the [Hugging Face Space](https://huggingface.co/spaces/vikhyatk/moondream2)!
**Benchmarks**
| Release | VQAv2 | GQA | TextVQA | TallyQA (simple) | TallyQA (full) |
| --- | --- | --- | --- | --- | --- |
| 2024-03-04 | 74.2 | 58.5 | 36.4 | - | - |
| 2024-03-06 | 75.4 | 59.8 | 43.1 | 79.5 | 73.2 |
| 2024-03-13 | 76.8 | 60.6 | 46.4 | 79.6 | 73.3 |
| 2024-04-02 | 77.7 | 61.7 | 49.7 | 80.1 | 74.2 |
| 2024-05-08 | 79.0 | 62.7 | 53.1 | 81.6 | 76.1 |
| **2024-05-20** (latest) | 79.4 | 63.1 | 57.2 | 82.1 | 76.6 |
**Usage**
```bash
pip install transformers einops
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
model_id = "vikhyatk/moondream2"
revision = "2024-05-20"
model = AutoModelForCausalLM.from_pretrained(
model_id, trust_remote_code=True, revision=revision
)
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "Describe this image.", tokenizer))
```
The model is updated regularly, so we recommend pinning the model version to a
specific release as shown above.
|
mlabonne/NeuralPipe-7B-ties
|
mlabonne
| 2024-06-22T22:33:07Z | 57 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-27T19:46:38Z |
---
license: apache-2.0
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
tags:
- merge
model-index:
- name: NeuralPipe-7B-ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.37
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-ties
name: Open LLM Leaderboard
---
# NeuralPipe-7B-ties
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## ⚡ Quantized models
Thanks to TheBloke for the quantized models:
* **GGUF**: https://huggingface.co/TheBloke/NeuralPipe-7B-ties-GGUF
* **AWQ**: https://huggingface.co/TheBloke/NeuralPipe-7B-ties-AWQ
* **GPTQ**: https://huggingface.co/TheBloke/NeuralPipe-7B-ties-GPTQ
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: OpenPipe/mistral-ft-optimized-1218
parameters:
density: 0.5
weight: 0.5
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
```
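## 💻 Usage
A minimal inference sketch (not part of the original card), assuming the standard 🤗 Transformers text-generation pipeline:
```python
# Hedged example: load the merged model through the transformers pipeline
import torch
import transformers

pipe = transformers.pipeline(
    "text-generation",
    model="mlabonne/NeuralPipe-7B-ties",
    torch_dtype=torch.float16,
    device_map="auto",
)
out = pipe("What is a large language model?", max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```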
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralPipe-7B-ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.55|
|AI2 Reasoning Challenge (25-Shot)|67.92|
|HellaSwag (10-Shot) |86.04|
|MMLU (5-Shot) |64.24|
|TruthfulQA (0-shot) |61.37|
|Winogrande (5-shot) |80.19|
|GSM8k (5-shot) |69.52|
|
C-Ilyas/whisper-base-darija
|
C-Ilyas
| 2024-06-22T22:27:00Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-22T22:26:45Z |
---
language:
- ar
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
model-index:
- name: Whisper-Base-Darija
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Base-Darija
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Algerian Darija Dialect dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.0852
- eval_wer: 243.5823
- eval_runtime: 210.226
- eval_samples_per_second: 0.376
- eval_steps_per_second: 0.376
- epoch: 100.0
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
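As a minimal inference sketch (not from the original card), the checkpoint can be loaded with the standard 🤗 Transformers ASR pipeline; the audio path below is a placeholder:
```python
# Hedged example: transcribe a local audio file with the fine-tuned checkpoint
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="C-Ilyas/whisper-base-darija")
result = asr("sample.wav")  # "sample.wav" is a placeholder path
print(result["text"])
```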
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 3000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
myrulezzzz/mistral_instructq8
|
myrulezzzz
| 2024-06-22T22:22:32Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:myrulezzzz/mistral_custom16bit",
"base_model:quantized:myrulezzzz/mistral_custom16bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T22:19:50Z |
---
base_model: myrulezzzz/mistral_custom16bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** myrulezzzz
- **License:** apache-2.0
- **Finetuned from model:** myrulezzzz/mistral_custom16bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
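A hedged loading sketch (not from the original card): since the repository ships GGUF weights, one option is [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the `filename` glob below is an assumption based on the "q8" repo name.
```python
# Hedged sketch: "*q8_0.gguf" is an assumed filename pattern, not confirmed by the card
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="myrulezzzz/mistral_instructq8",
    filename="*q8_0.gguf",
)
out = llm("Q: What is Mistral? A:", max_tokens=64)
print(out["choices"][0]["text"])
```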
|
powermove72/SharkOgno2-7b-Passthrough
|
powermove72
| 2024-06-22T22:16:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Shark-1",
"eren23/OGNO-7b-dpo-truthful",
"base_model:eren23/OGNO-7b-dpo-truthful",
"base_model:merge:eren23/OGNO-7b-dpo-truthful",
"base_model:powermove72/Shark-1",
"base_model:merge:powermove72/Shark-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T22:11:55Z |
---
base_model:
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
---
# SharkOgno2-7b-Passthrough
SharkOgno2-7b-Passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [powermove72/Shark-1](https://huggingface.co/powermove72/Shark-1)
* [eren23/OGNO-7b-dpo-truthful](https://huggingface.co/eren23/OGNO-7b-dpo-truthful)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: powermove72/Shark-1
layer_range: [0, 8]
- sources:
- model: eren23/OGNO-7b-dpo-truthful
layer_range: [8, 32]
merge_method: passthrough
tokenizer_source: union
dtype: float16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "powermove72/SharkOgno2-7b-Passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate in half precision with automatic device placement
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ostoveland/test5
|
ostoveland
| 2024-06-22T22:13:07Z | 11 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:24000",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-22T22:12:36Z |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:24000
- loss:TripletLoss
widget:
- source_sentence: 'query: Spesialtilpasset bokhylle'
sentences:
- 'query: Snekring av hyller og kontorpult'
- 'query: Påbygg Enebolig'
- 'query: Nye takrenner'
- source_sentence: 'query: * Fortsatt ledig: Bytte drenering-regnvannsrør fra kum
til andre kum'
sentences:
- 'query: * Fortsatt ledig: Tilstandsrapport'
- 'query: Vannpumpe fra brønn og filter.'
- 'query: Byggtegning Fasade'
- source_sentence: 'query: Tømming av parafintank'
sentences:
- 'query: Tegne endring på hus'
- 'query: Oljetank'
- 'query: Renovering av bad'
- source_sentence: 'query: Endre planløsning, tegne nytt kjøkken, nytt bad og nytt
omkledningsrom/vaskerom'
sentences:
- 'query: Bygge hybel i kjelleren'
- 'query: Bygging av støttemur'
- 'query: Riving av bygg.'
- source_sentence: 'query: Service på varmepumpe'
sentences:
- 'query: Masseutskifting - klargjøre for asfaltering'
- 'query: Montere et komplett HTH kjøkken'
- 'query: Service av varmepumpe'
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ostoveland/test5")
# Run inference
sentences = [
'query: Service på varmepumpe',
'query: Service av varmepumpe',
'query: Montere et komplett HTH kjøkken',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 24,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.21 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.93 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.42 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------|:------------------------------------------------|:--------------------------------------------------|
| <code>query: Bygge terrasse</code> | <code>query: Legge ca 60-70kvm terrasse.</code> | <code>query: Etterisolering av loft</code> |
| <code>query: Felle plommetre og ta med et epletre</code> | <code>query: Felling av 5 trær</code> | <code>query: Total Renovering</code> |
| <code>query: Maling av enebolig utvendig</code> | <code>query: Malearbeid Vedlikehold</code> | <code>query: Tilbygg 37,5 kvm til enebolig</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 1
}
```
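For reference, a minimal training sketch with the v3 `SentenceTransformerTrainer` API (the single-row dataset below is an illustrative stand-in for the 24,000 real triplets):
```python
# Hedged sketch: reproduce the triplet setup with the v3 trainer API
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
# Columns follow the anchor / positive / negative order of the card's samples
train_dataset = Dataset.from_dict({
    "sentence_0": ["query: Bygge terrasse"],
    "sentence_1": ["query: Legge ca 60-70kvm terrasse."],
    "sentence_2": ["query: Etterisolering av loft"],
})
loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=1)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```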
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3333 | 500 | 0.4576 |
| 0.6667 | 1000 | 0.2169 |
| 1.0 | 1500 | 0.168 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ostoveland/test4
|
ostoveland
| 2024-06-22T22:03:19Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:24000",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-22T22:02:39Z |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:24000
- loss:TripletLoss
widget:
- source_sentence: 'query: Bytte regulator varmekabler'
sentences:
- 'query: legge varmekabler i takrenner i sameie'
- 'query: Garasjeport'
- 'query: Skriftlig vurdering av fuktskade/vannskade i sokkeleilighet.'
- source_sentence: 'query: Opprette hybler i enebolig.'
sentences:
- 'query: Helrenovering av bad 2,4 m^2 og toalettrom'
- 'query: Innvendig paneling av hytte på Budor'
- 'query: Vurdere muligheter for lading av elbil/hybrid'
- source_sentence: 'query: Mikrosement'
sentences:
- 'query: Legge plater med sløyfer til vannbåren varme 45 m2'
- 'query: Mikrosement på bad'
- 'query: * Fortsatt ledig: Spraylakkere 4 spisestuestoler'
- source_sentence: 'query: Ny hage til nytt hus ca 400 kvm'
sentences:
- 'query: Nytt lag med singel i innkjørsel'
- 'query: Skifte bordkledning'
- 'query: Reparere murtrapp IG legge skiferstein'
- source_sentence: 'query: Betongskjæring'
sentences:
- 'query: * Fortsatt ledig: Membran legging'
- 'query: Drenering av hus'
- 'query: Saging av hull til vindu'
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ostoveland/test4")
# Run inference
sentences = [
'query: Betongskjæring',
'query: Saging av hull til vindu',
'query: Drenering av hus',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 24,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.31 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.29 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.93 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:-------------------------------------------------------------------------------------|:-------------------------------------------|:----------------------------------------------|
| <code>query: Installere radonsug/radonvifte i kjeller</code> | <code>query: Radon sikring enebolig</code> | <code>query: Mikrosement på bad</code> |
| <code>query: Bytte nedre del av en takrenne i klassisk bygård (fra 2. etasje)</code> | <code>query: Pipebeslag</code> | <code>query: Riving av bad</code> |
| <code>query: Gjerde</code> | <code>query: Flettverkgjerde 65 m</code> | <code>query: glassplate til salongbord</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3333 | 500 | 0.4725 |
| 0.6667 | 1000 | 0.2214 |
| 1.0 | 1500 | 0.1647 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
powermove72/SharkOgno-11b-Passthrough
|
powermove72
| 2024-06-22T22:00:37Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Shark-1",
"eren23/OGNO-7b-dpo-truthful",
"conversational",
"custom_code",
"base_model:eren23/OGNO-7b-dpo-truthful",
"base_model:merge:eren23/OGNO-7b-dpo-truthful",
"base_model:powermove72/Shark-1",
"base_model:merge:powermove72/Shark-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T21:52:05Z |
---
base_model:
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
---
# SharkOgno-11b-Passthrough
SharkOgno-11b-Passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [powermove72/Shark-1](https://huggingface.co/powermove72/Shark-1)
* [eren23/OGNO-7b-dpo-truthful](https://huggingface.co/eren23/OGNO-7b-dpo-truthful)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: powermove72/Shark-1
layer_range: [0, 24]
- sources:
- model: eren23/OGNO-7b-dpo-truthful
layer_range: [8, 32]
merge_method: passthrough
tokenizer_source: union
dtype: float16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "powermove72/SharkOgno-11b-Passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Run generation in half precision, placing layers automatically
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
John6666/wai-real-cn-v6-sdxl-spo
|
John6666
| 2024-06-22T21:56:51Z | 2,594 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T21:51:45Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
- SPO
---
Original model is [here](https://civitai.com/models/469902?modelVersionId=583715).
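A minimal loading sketch (not from the original card), assuming the standard 🤗 Diffusers SDXL pipeline:
```python
# Hedged example: load the checkpoint with the SDXL pipeline and sample one image
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/wai-real-cn-v6-sdxl-spo",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("photorealistic portrait, natural light").images[0]
image.save("out.png")
```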
|
sidvash/famus_exh_task2_unsloth_llama-3-8b-Instruct-bnb-4bit-merged_16bit
|
sidvash
| 2024-06-22T21:46:06Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-28T05:24:51Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** sidvash
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Task
Given a document and an event type from FrameNet, extract all instances of that event type from the document.
# Data
v0.1
This model used the FAMuS train set (759 examples):
- Each instance had a gold extracted event plus silver extracted events of the same type, generated by the Gemini-1.5-pro model using 10-shot in-context learning (ICL) with gold-annotated examples
- All instances are positive examples (i.e., at least one instance of the event type is present in the data)
More details: TBD
|
John6666/himawari-mix-xl-v13-sdxl-spo
|
John6666
| 2024-06-22T21:40:50Z | 2,435 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"SPO",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T21:34:27Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- SPO
---
Original model is [here](https://civitai.com/models/131611?modelVersionId=558064).
|
fruk19/E_ASR_MID
|
fruk19
| 2024-06-22T21:36:14Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"th",
"dataset:fruk19/E_SMALL",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-22T13:48:57Z |
---
language:
- th
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- fruk19/E_SMALL
metrics:
- wer
model-index:
- name: South_asri
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: aicookcook
type: fruk19/E_SMALL
config: default
split: None
args: 'config: th'
metrics:
- name: Wer
type: wer
value: 6.109316028130006
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# South_asri
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aicookcook dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0666
- Wer: 6.1093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0464 | 2.0 | 6000 | 0.0702 | 9.2237 |
| 0.0095 | 4.0 | 12000 | 0.0648 | 6.6171 |
| 0.0007 | 6.0 | 18000 | 0.0666 | 6.1093 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
John6666/chacol-omega-mix-v11a-sdxl-spo
|
John6666
| 2024-06-22T21:35:09Z | 2,442 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T21:29:04Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
Original model is [here](https://civitai.com/models/456108?modelVersionId=507746).
|
ostoveland/test3
|
ostoveland
| 2024-06-22T21:35:07Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2400",
"loss:TripletLoss",
"loss:MultipleNegativesRankingLoss",
"loss:CoSENTLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-22T21:34:43Z |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2400
- loss:TripletLoss
- loss:MultipleNegativesRankingLoss
- loss:CoSENTLoss
widget:
- source_sentence: Flislegging av hall
sentences:
- 'query: tapetsering av rom med grunnflate 4x4.5 meter minus tre dører'
- 'query: fliser i hall'
- 'query: fornye markiseduk'
- source_sentence: Betongskjæring av rømningsvindu
sentences:
- Installere ventilasjonssystem
- Installere nytt vindu i trevegg
- Skjære ut rømningsvindu i betongvegg
- source_sentence: Ny garasje leddport
sentences:
- Installere garasjeport
- Bygge ny garasje
- Legge nytt tak
- source_sentence: Legge varmefolie i gang og stue.
sentences:
- Strø grusveier med salt
- Legge varmekabler
- Installere gulvvarme
- source_sentence: Oppgradere kjeller til boareale
sentences:
- Oppussing av kjeller for boligformål
- elektriker på bolig på 120kvm
- Installere dusjkabinett
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: triplet
name: Triplet
dataset:
name: test triplet evaluation
type: test-triplet-evaluation
metrics:
- type: cosine_accuracy
value: 0.7470049330514447
name: Cosine Accuracy
- type: dot_accuracy
value: 0.31853417899929526
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.740662438336857
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.7420718816067653
name: Euclidean Accuracy
- type: max_accuracy
value: 0.7470049330514447
name: Max Accuracy
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision bf3bf13ab40c3157080a7ab344c831b9ad18b5eb -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ostoveland/test3")
# Run inference
sentences = [
'Oppgradere kjeller til boareale',
'Oppussing av kjeller for boligformål',
'Installere dusjkabinett',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `test-triplet-evaluation`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:----------|
| cosine_accuracy | 0.747 |
| dot_accuracy | 0.3185 |
| manhattan_accuracy | 0.7407 |
| euclidean_accuracy | 0.7421 |
| **max_accuracy** | **0.747** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### Unnamed Dataset
* Size: 800 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.91 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.87 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.14 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:----------------------------------------------|:-------------------------------------------|:------------------------------------------|
| <code>Oppussing av stue</code> | <code>Renovere stue</code> | <code>Male stue</code> |
| <code>Sameie søker vaktmestertjenester</code> | <code>Trenger vaktmester til sameie</code> | <code>Renholdstjenester for sameie</code> |
| <code>Sprenge og klargjøre til garasje</code> | <code>Grave ut til garasje</code> | <code>Bygge garasje</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
#### Unnamed Dataset
* Size: 800 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.36 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.36 tokens</li><li>max: 26 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------|:---------------------------------------------------------------------|
| <code>Helsparkle rom med totale veggflater på ca 20 m2</code> | <code>query: helsparkling av rom med 20 m2 veggflater</code> |
| <code>Reparere skifer tak og tak vindu</code> | <code>query: fikse takvindu og skifertak</code> |
| <code>Pigge opp flisgulv, fjerne gips vegger og gipstak - 11 kvm</code> | <code>query: fjerne flisgulv, gipsvegger og gipstak på 11 kvm</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### Unnamed Dataset
* Size: 800 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.32 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.18 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.51</li><li>max: 0.95</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------|:---------------------------------------------------|:------------------|
| <code>Legging av våtromsbelegg</code> | <code>Renovering av bad</code> | <code>0.65</code> |
| <code>overvåkingskamera 3stk</code> | <code>installasjon av 3 overvåkingskameraer</code> | <code>0.95</code> |
| <code>Bytte lamper i portrom</code> | <code>Male portrom</code> | <code>0.15</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
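A hedged sketch of how the three unnamed datasets and their losses might be combined with the v3 trainer's round-robin sampler (the single-row datasets below are illustrative stand-ins for the 800-sample splits):
```python
# Hedged sketch: one loss per dataset, sampled round-robin
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import TripletLoss, MultipleNegativesRankingLoss, CoSENTLoss
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    MultiDatasetBatchSamplers,
)

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

triplets = Dataset.from_dict({
    "sentence_0": ["Oppussing av stue"],
    "sentence_1": ["Renovere stue"],
    "sentence_2": ["Male stue"],
})
pairs = Dataset.from_dict({
    "sentence_0": ["Reparere skifer tak og tak vindu"],
    "sentence_1": ["query: fikse takvindu og skifertak"],
})
scored_pairs = Dataset.from_dict({
    "sentence_0": ["Legging av våtromsbelegg"],
    "sentence_1": ["Renovering av bad"],
    "label": [0.65],
})

losses = {
    "triplets": TripletLoss(model, triplet_margin=5),
    "pairs": MultipleNegativesRankingLoss(model),
    "scored_pairs": CoSENTLoss(model),
}
args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=1,
    per_device_train_batch_size=32,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset={"triplets": triplets, "pairs": pairs, "scored_pairs": scored_pairs},
    loss=losses,
)
trainer.train()
```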
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | test-triplet-evaluation_max_accuracy |
|:-----:|:----:|:------------------------------------:|
| 1.0 | 75 | 0.7470 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ostoveland/test2
|
ostoveland
| 2024-06-22T21:32:40Z | 11 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2400",
"loss:TripletLoss",
"loss:MultipleNegativesRankingLoss",
"loss:CoSENTLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-base",
"base_model:finetune:intfloat/multilingual-e5-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-22T21:31:41Z |
---
base_model: intfloat/multilingual-e5-base
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2400
- loss:TripletLoss
- loss:MultipleNegativesRankingLoss
- loss:CoSENTLoss
widget:
- source_sentence: oppgradering av sikringsskap med nye sikringer
sentences:
- 'query: pipearbeid i kjeller'
- 'query: utskifting av sikringer i sikringsskap'
- 'query: arkitekttegning av tilbygg'
- source_sentence: Renovere soverom og stue
sentences:
- Utvidelse av enebolig
- Male soverom og stue
- Pusse opp soverom og stue
- source_sentence: Fjerne vegg-til-vegg teppe, mugg under teppet og legge parkett
sentences:
- Legge nytt parkettgulv
- Rengjøre tepper
- Installere kjøkkenvifte
- source_sentence: Riving av gammelt kjøkken og montering av nytt kjøkken
sentences:
- Installere automatsikringer
- Pusse opp kjøkken
- Bytte kjøkken
- source_sentence: Stålrør i pipe
sentences:
- Tette lekkasje i pipe
- Asfaltering av oppkjørsel
- Gravearbeid i hagen
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: triplet
name: Triplet
dataset:
name: test triplet evaluation
type: test-triplet-evaluation
metrics:
- type: cosine_accuracy
value: 0.9140239605355884
name: Cosine Accuracy
- type: dot_accuracy
value: 0.08597603946441155
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.9126145172656801
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.9140239605355884
name: Euclidean Accuracy
- type: max_accuracy
value: 0.9140239605355884
name: Max Accuracy
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ostoveland/test2")
# Run inference
sentences = [
'Stålrør i pipe',
'Tette lekkasje i pipe',
'Gravearbeid i hagen',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
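Note that the training pairs shown later in this card prefix one side with `query: `, following the E5 convention of the base model, so prepending `query: ` to search queries at inference time may improve retrieval quality.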
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `test-triplet-evaluation`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:----------|
| cosine_accuracy | 0.914 |
| dot_accuracy | 0.086 |
| manhattan_accuracy | 0.9126 |
| euclidean_accuracy | 0.914 |
| **max_accuracy** | **0.914** |
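The evaluation can be reproduced with a sketch along these lines; the triplet below is taken from the widget examples above and stands in for the real evaluation set:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("ostoveland/test2")

# Placeholder triplet (anchor, positive, negative) from the widget examples;
# the reported metrics were computed over the full held-out triplet set
evaluator = TripletEvaluator(
    anchors=["Stålrør i pipe"],
    positives=["Tette lekkasje i pipe"],
    negatives=["Gravearbeid i hagen"],
    name="test-triplet-evaluation",
)
print(evaluator(model))  # dict of accuracies per distance function
```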
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### Unnamed Dataset
* Size: 800 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.91 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.87 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.14 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:----------------------------------------------------------|:-----------------------------------------------|:------------------------------------|
| <code>søknad om dispensasjon fra reguleringsformål</code> | <code>Søknad om byggetillatelse</code> | <code>Søknad om bruksendring</code> |
| <code>Mikrosement på bad</code> | <code>Påføring av mikrosement i baderom</code> | <code>Flislegging på bad</code> |
| <code>Garasje</code> | <code>Bygge garasje</code> | <code>Renovere garasje</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
#### Unnamed Dataset
* Size: 800 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.36 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.36 tokens</li><li>max: 26 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------|:-------------------------------------------------------------------|
| <code>Riving av betongtrapp</code> | <code>query: demontere betongtrapp</code> |
| <code>vurdering av bærebjelker</code> | <code>query: inspeksjon av bærebjelker</code> |
| <code>bytte av skrusikringer i sikringsskap</code> | <code>query: oppgradering av sikringsskap med nye sikringer</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### Unnamed Dataset
* Size: 800 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.32 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.18 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.51</li><li>max: 0.95</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-----------------------------------------------------------------------|:----------------------------------------------|:------------------|
| <code>Reparere skader av rekkeverk (metallplater) på en balkong</code> | <code>Installere nytt balkongrekkverk</code> | <code>0.35</code> |
| <code>Vannbåren varme - ettermontering</code> | <code>Oppgradering til vannbåren varme</code> | <code>0.75</code> |
| <code>Pusse pipemur</code> | <code>Maling av peis</code> | <code>0.15</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
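As a rough sketch of how these three unnamed datasets and their losses could be wired together with the round-robin multi-dataset sampler listed below; the tiny datasets here are placeholders built from the sample rows above, not the real 800-row splits:
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    losses,
    util,
)
from sentence_transformers.training_args import (
    MultiDatasetBatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("intfloat/multilingual-e5-base")

# Tiny placeholder datasets mirroring the three column layouts documented above
triplets = Dataset.from_dict({
    "sentence_0": ["Garasje"],
    "sentence_1": ["Bygge garasje"],
    "sentence_2": ["Renovere garasje"],
})
pairs = Dataset.from_dict({
    "sentence_0": ["Riving av betongtrapp"],
    "sentence_1": ["query: demontere betongtrapp"],
})
scored_pairs = Dataset.from_dict({
    "sentence_0": ["Pusse pipemur"],
    "sentence_1": ["Maling av peis"],
    "label": [0.15],
})

# One loss per dataset, using the parameters documented in this card
loss_per_dataset = {
    "triplets": losses.TripletLoss(
        model,
        distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
        triplet_margin=5,
    ),
    "pairs": losses.MultipleNegativesRankingLoss(
        model, scale=20.0, similarity_fct=util.cos_sim
    ),
    "scored_pairs": losses.CoSENTLoss(
        model, scale=20.0, similarity_fct=util.pairwise_cos_sim
    ),
}

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=1,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset={"triplets": triplets, "pairs": pairs, "scored_pairs": scored_pairs},
    loss=loss_per_dataset,
)
trainer.train()
```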
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | test-triplet-evaluation_max_accuracy |
|:-----:|:----:|:------------------------------------:|
| 1.0 | 75 | 0.9140 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
excalibur12/saq_asr-scr_w2v2-base_001
|
excalibur12
| 2024-06-22T21:29:18Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T18:36:42Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: saq_asr-scr_w2v2-base_001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saq_asr-scr_w2v2-base_001
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2549
- Per: 0.1327
- Pcc: 0.6578
- Ctc Loss: 0.4805
- Mse Loss: 1.0040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- seed: 1111
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 742
- training_steps: 7420
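For readers who want to replicate this setup, the list above maps roughly onto `transformers` `TrainingArguments` as in this hedged sketch; the output directory name is an assumption:
```python
from transformers import TrainingArguments

# Sketch mirroring the listed hyperparameters; "output_dir" is an assumption
training_args = TrainingArguments(
    output_dir="saq_asr-scr_w2v2-base_001",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=1111,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=742,
    max_steps=7420,
)
```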
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per | Pcc | Ctc Loss | Mse Loss |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:--------:|
| 11.7284 | 1.0 | 742 | 4.5186 | 0.9994 | 0.5696 | 3.7385 | 0.9749 |
| 3.1398 | 2.0 | 1484 | 2.4884 | 0.2042 | 0.6246 | 0.7601 | 1.6844 |
| 1.5121 | 3.0 | 2226 | 1.5395 | 0.1627 | 0.6359 | 0.5898 | 0.9117 |
| 1.0897 | 4.0 | 2968 | 1.4423 | 0.1551 | 0.6390 | 0.5386 | 0.8928 |
| 0.6968 | 5.0 | 3710 | 1.5142 | 0.1477 | 0.6443 | 0.5085 | 1.0010 |
| 0.3184 | 6.0 | 4452 | 1.8725 | 0.1411 | 0.6557 | 0.4879 | 1.2796 |
| -0.0502 | 7.0 | 5194 | 1.4015 | 0.1387 | 0.6577 | 0.4808 | 1.0161 |
| -0.3567 | 8.0 | 5936 | 1.3481 | 0.1345 | 0.6557 | 0.4852 | 1.0170 |
| -0.5908 | 9.0 | 6678 | 1.2779 | 0.1340 | 0.6604 | 0.4810 | 1.0066 |
| -0.7364 | 10.0 | 7420 | 1.2549 | 0.1327 | 0.6578 | 0.4805 | 1.0040 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.2
|
kanishka/smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-seed_211-1e-3
|
kanishka
| 2024-06-22T21:21:29Z | 64 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual_babylm_measure_nps_as_singular_new",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T22:44:51Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_measure_nps_as_singular_new
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-seed_211-1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual_babylm_measure_nps_as_singular_new
type: kanishka/counterfactual_babylm_measure_nps_as_singular_new
metrics:
- name: Accuracy
type: accuracy
value: 0.4093553697888651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-seed_211-1e-3
This model was trained from scratch on the kanishka/counterfactual_babylm_measure_nps_as_singular_new dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4270
- Accuracy: 0.4094
## Model description
More information needed
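In lieu of further details, a hedged inference sketch, assuming the checkpoint loads as a standard causal language model:
```python
from transformers import pipeline

# Hedged usage sketch; assumes the checkpoint loads as a standard causal LM
generator = pipeline(
    "text-generation",
    model="kanishka/smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-seed_211-1e-3",
)
print(generator("The children were", max_new_tokens=20)[0]["generated_text"])
```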
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6072 | 1.0 | 18602 | 3.7687 | 0.3592 |
| 3.3848 | 2.0 | 37204 | 3.5595 | 0.3802 |
| 3.2576 | 3.0 | 55806 | 3.4654 | 0.3927 |
| 3.177 | 4.0 | 74408 | 3.4207 | 0.3982 |
| 3.1212 | 5.0 | 93010 | 3.4026 | 0.4006 |
| 3.0724 | 6.0 | 111612 | 3.3763 | 0.4035 |
| 3.0373 | 7.0 | 130214 | 3.3708 | 0.4051 |
| 3.0102 | 8.0 | 148816 | 3.3649 | 0.4063 |
| 2.9818 | 9.0 | 167418 | 3.3810 | 0.4072 |
| 2.9526 | 10.0 | 186020 | 3.3640 | 0.4078 |
| 2.9332 | 11.0 | 204622 | 3.3817 | 0.4081 |
| 2.9076 | 12.0 | 223224 | 3.3767 | 0.4087 |
| 2.8857 | 13.0 | 241826 | 3.3850 | 0.4089 |
| 2.8653 | 14.0 | 260428 | 3.3919 | 0.4093 |
| 2.8483 | 15.0 | 279030 | 3.3888 | 0.4091 |
| 2.828 | 16.0 | 297632 | 3.4040 | 0.4093 |
| 2.8069 | 17.0 | 316234 | 3.4020 | 0.4094 |
| 2.7906 | 18.0 | 334836 | 3.4096 | 0.4096 |
| 2.7701 | 19.0 | 353438 | 3.4215 | 0.4093 |
| 2.7515 | 20.0 | 372040 | 3.4270 | 0.4094 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
John6666/ebara-pony-v1-sdxl-spo
|
John6666
| 2024-06-22T21:20:56Z | 2,325 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T21:16:06Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
Original model is [here](https://huggingface.co/tsukihara/xl_model).
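Given the `diffusers:StableDiffusionXLPipeline` tag, a hedged usage sketch (the prompt and settings are illustrative only):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hedged sketch: loads the repo as an SDXL pipeline, per the diffusers tag
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ebara-pony-v1-sdxl-spo", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, anime style, cherry blossoms", num_inference_steps=25).images[0]
image.save("sample.png")
```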
|
RicardoMorim/ppo-Huggy
|
RicardoMorim
| 2024-06-22T21:19:30Z | 16 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-06-22T21:19:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: RicardoMorim/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
blockblockblock/gpt2-bpw5-exl2
|
blockblockblock
| 2024-06-22T21:19:18Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"jax",
"tflite",
"rust",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-22T21:17:59Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i`, not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
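As a quick sanity check of the vocabulary size quoted above:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(len(tokenizer))  # 50257, the byte-level BPE vocabulary size
```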
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
blockblockblock/gpt2-bpw5.5-exl2
|
blockblockblock
| 2024-06-22T21:16:49Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"jax",
"tflite",
"rust",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-22T21:15:32Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i`, not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
kanishka/smolm-autoreg-bpe-counterfactual_babylm_indef_articles_with_pl_nouns_removal_new-1e-3
|
kanishka
| 2024-06-22T21:16:32Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T17:24:20Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new-1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new
type: kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new
metrics:
- name: Accuracy
type: accuracy
value: 0.4117169529419352
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new-1e-3
This model was trained from scratch on the kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4004
- Accuracy: 0.4117
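For intuition, this evaluation loss corresponds to a perplexity of roughly exp(3.4004) ≈ 30.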
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5965 | 1.0 | 18600 | 3.7932 | 0.3590 |
| 3.376 | 2.0 | 37200 | 3.5949 | 0.3809 |
| 3.247 | 3.0 | 55800 | 3.4625 | 0.3933 |
| 3.1633 | 4.0 | 74400 | 3.4094 | 0.3999 |
| 3.1084 | 5.0 | 93000 | 3.3589 | 0.4061 |
| 3.0663 | 6.0 | 111600 | 3.3638 | 0.4077 |
| 3.0305 | 7.0 | 130200 | 3.3580 | 0.4081 |
| 2.994 | 8.0 | 148800 | 3.3293 | 0.4100 |
| 2.9664 | 9.0 | 167400 | 3.3262 | 0.4114 |
| 2.942 | 10.0 | 186000 | 3.3377 | 0.4105 |
| 2.9136 | 11.0 | 204600 | 3.3401 | 0.4118 |
| 2.8886 | 12.0 | 223200 | 3.3339 | 0.4125 |
| 2.8701 | 13.0 | 241800 | 3.3341 | 0.4137 |
| 2.8515 | 14.0 | 260400 | 3.3494 | 0.4125 |
| 2.8292 | 15.0 | 279000 | 3.3648 | 0.4116 |
| 2.8094 | 16.0 | 297600 | 3.3643 | 0.4128 |
| 2.7851 | 17.0 | 316200 | 3.3658 | 0.4125 |
| 2.7685 | 18.0 | 334800 | 3.3846 | 0.4120 |
| 2.7454 | 19.0 | 353400 | 3.3961 | 0.4116 |
| 2.7269 | 20.0 | 372000 | 3.4004 | 0.4117 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
blockblockblock/gpt2-bpw6-exl2
|
blockblockblock
| 2024-06-22T21:14:22Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"jax",
"tflite",
"rust",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-22T21:13:09Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i`, not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
blockblockblock/TinyLlama_v1.1-bpw2.5-exl2
|
blockblockblock
| 2024-06-22T21:07:23Z | 9 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"arxiv:2401.02385",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-22T21:07:01Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
language:
- en
---
# TinyLlama-1.1B-v1.1
- **Codebase:** [github.com/jzhang38/TinyLlama](https://github.com/jzhang38/TinyLlama)
- **Technical Report:** [arxiv.org/pdf/2401.02385](https://arxiv.org/pdf/2401.02385)
<div align="center">
<img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be dropped into many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, making it suitable for the many applications that demand a restricted computation and memory footprint.
## Overview
In this project, rather than only training a single TinyLlama model, we first train TinyLlama on a corpus of 1.5 trillion tokens to obtain foundational language capabilities. Subsequently, we take this model and turn it into three different models by continual pre-training with three distinct data sampling strategies. For a visual representation of this process, please refer to the figure below.

## Pretraining
Due to these issues ([bug1](https://whimsical-aphid-86d.notion.site/Release-of-TinyLlama-1-5T-Checkpoints-Postponed-01b266998c1c47f78f5ae1520196d194?pvs=4), [bug2](https://whimsical-aphid-86d.notion.site/2023-12-18-Updates-from-TinyLlama-Team-7d30c01fff794da28ccc952f327c8d4f)), we retrained TinyLlama to provide a better model. We trained the model on 2T tokens and divided the pretraining into three stages: 1) basic pretraining, 2) continual pretraining with specific domains, and 3) cooldown.
#### Basic pretraining
In this initial phase, we trained the model with only SlimPajama to develop its commonsense reasoning capabilities. The model saw 1.5T tokens during this basic pretraining period. Since we used a cluster with 4 A100-40G GPUs per node and only sharded model weights within a node, we could only set the batch size to approximately 1.8M tokens this time.
#### Continual pretraining with specific domain
We incorporated three different kinds of corpora during this stage: SlimPajama (the same as in the first phase), Math&Code (StarCoder and Proof Pile), and Chinese (SkyPile). This approach allowed us to develop three variant models with specialized capabilities.
During the first ~6B tokens of this stage, we linearly increased the sampling proportion of the domain-specific corpora (excluding SlimPajama, which remained unchanged from stage 1), as sketched below. This sampling warmup strategy was designed to gradually adjust the distribution of the pretraining data, ensuring a more stable training process. After this warmup, we continued pretraining the model with a stable sampling strategy until reaching ~1.85T tokens.
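A minimal sketch of that warmup (the actual training code is not shown here; the final domain share of 25% is taken from the Math&Code data table below, 15% starcoder + 10% proof_pile):
```python
# Hedged sketch: linearly increase the domain-specific sampling proportion over
# the first ~6B tokens of stage 2, then hold it stable.
def domain_proportion(tokens_seen: float,
                      warmup_tokens: float = 6e9,
                      final_prop: float = 0.25) -> float:
    # final_prop = 0.25 matches the Math&Code variant (15% starcoder + 10% proof_pile)
    return final_prop * min(tokens_seen / warmup_tokens, 1.0)
```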
#### Cooldown
Implementing a cooldown phase has become a crucial technique for achieving better model convergence at the end of pretraining. However, since we had already used a cosine learning rate schedule from the beginning, it was challenging to alter the learning rate for cooldown the way MiniCPM or DeepSeek do. Therefore, we cooled down by adjusting our batch size instead: we increased the batch size from 1.8M to 7.2M tokens while keeping the original cosine learning rate schedule during the cooldown stage, as illustrated in the sketch below.
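A sketch under assumed values (the exact learning-rate bounds and cooldown start point are not stated here); the point is that the cosine schedule stays fixed while the batch size jumps:
```python
import math

def cosine_lr(step: int, total_steps: int,
              lr_max: float = 4e-4, lr_min: float = 4e-5) -> float:
    # Hypothetical bounds; the cosine shape is what stays fixed through cooldown.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

def batch_size_tokens(step: int, total_steps: int,
                      cooldown_frac: float = 0.925) -> int:
    # Cooldown: batch size jumps from ~1.8M to ~7.2M tokens; cooldown_frac is hypothetical.
    return 7_200_000 if step / total_steps >= cooldown_frac else 1_800_000
```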
#### TinyLlama model family
Following this extensive and detailed pretraining process, we are now releasing three specialized versions of our model:
1. **TinyLlama_v1.1**: The standard version, used for general purposes.
2. **TinyLlama_v1.1_Math&Code**: Equipped with better ability for math and code.
3. **TinyLlama_v1.1_Chinese**: Equipped with a good understanding of Chinese.
## Data
Here we list our data sampling proportions (%) in each stage:
### TinyLlama_v1.1
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 100.0 | 100.0 |
### TinyLlama_v1.1_math_code
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 75.0 | 75.0 |
| starcoder | - | 15.0 | 15.0 |
| proof_pile | - | 10.0 | 10.0 |
### TinyLlama_v1.1_chinese
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 50.0 | 50.0 |
| skypile | - | 50.0 | 50.0 |
### How to use
You will need transformers>=4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "TinyLlama/TinyLlama_v1.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
do_sample=True,
top_k=10,
num_return_sequences=1,
repetition_penalty=1.5,
eos_token_id=tokenizer.eos_token_id,
max_length=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| ----------------------------------------- | --------------- | --------- | --------- | ---------- | --------- | --------- | ----- | --------- | --------- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
| TinyLlama-1.1B-v1.1 | 2T | **61.47** | **36.80** | 59.43 | 32.68 | **55.47** | 55.99 | **73.56** | 53.63 |
| TinyLlama-1.1B-v1_math_code | 2T | 60.80 | 36.40 | **60.22** | **33.87** | 55.20 | 57.09 | 72.69 | **53.75** |
| TinyLlama-1.1B-v1.1_chinese | 2T | 58.23 | 35.20 | 59.27 | 31.40 | 55.35 | **61.41** | 73.01 | 53.41 |
|
darkcloudai/huskylm-2.5-8b-AWQ
|
darkcloudai
| 2024-06-22T21:06:34Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-06-22T21:02:49Z |
---
license: llama3
---
AWQ (bits: 4, gs: 128, version: gemm) format weights for [https://huggingface.co/darkcloudai/huskylm-2.5-8b](https://huggingface.co/darkcloudai/huskylm-2.5-8b).
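A minimal loading sketch (not part of the original card), assuming a recent `transformers` with the `autoawq` backend installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "darkcloudai/huskylm-2.5-8b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers detects the AWQ quantization config and loads via autoawq
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```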
|
John6666/sympony-v2-fuga-sdxl
|
John6666
| 2024-06-22T21:05:56Z | 2,388 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T21:00:30Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/506645/sympony?modelVersionId=591018).
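A hedged usage sketch based on the repo tags (`diffusers:StableDiffusionXLPipeline`); the prompt and settings are illustrative:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/sympony-v2-fuga-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, anime style, cherry blossoms").images[0]  # illustrative prompt
image.save("out.png")
```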
|
tsavage68/Summary_L3_1000steps_1e7rate_03beta_CSFTDPO
|
tsavage68
| 2024-06-22T21:03:23Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"base_model:finetune:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T20:54:50Z |
---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_1000steps_1e7rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_1000steps_1e7rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5964
- Rewards/chosen: 0.0711
- Rewards/rejected: -1.1551
- Rewards/accuracies: 0.1400
- Rewards/margins: 1.2262
- Logps/rejected: -19.1142
- Logps/chosen: -9.1459
- Logits/rejected: -1.1071
- Logits/chosen: -1.1083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
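A minimal sketch of how these hyperparameters could map onto `trl`'s `DPOTrainer`. This is not the author's script: `beta=0.3` is inferred from the "03beta" in the model name, the preference dataset is a hypothetical stand-in, and in newer `trl` versions `beta` lives in `DPOConfig` instead of the trainer constructor.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/Summary_L3_1000steps_1e7rate_SFT2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference data with "prompt"/"chosen"/"rejected" columns
train_dataset = load_dataset("json", data_files="prefs.jsonl")["train"]

args = TrainingArguments(
    output_dir="Summary_L3_1000steps_1e7rate_03beta_CSFTDPO",
    learning_rate=1e-7,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    beta=0.3,  # inferred from the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```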
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6831 | 0.2004 | 50 | 0.6816 | 0.0015 | -0.0238 | 0.1300 | 0.0253 | -15.3431 | -9.3779 | -1.0962 | -1.0977 |
| 0.6795 | 0.4008 | 100 | 0.6463 | 0.0093 | -0.1112 | 0.1400 | 0.1205 | -15.6344 | -9.3518 | -1.0932 | -1.0948 |
| 0.6329 | 0.6012 | 150 | 0.6076 | 0.0323 | -0.3453 | 0.1400 | 0.3776 | -16.4149 | -9.2751 | -1.0926 | -1.0943 |
| 0.6091 | 0.8016 | 200 | 0.5997 | 0.0442 | -0.5668 | 0.1400 | 0.6110 | -17.1532 | -9.2355 | -1.0949 | -1.0965 |
| 0.6241 | 1.0020 | 250 | 0.5974 | 0.0514 | -0.7694 | 0.1400 | 0.8208 | -17.8283 | -9.2113 | -1.0983 | -1.0999 |
| 0.6239 | 1.2024 | 300 | 0.5969 | 0.0644 | -0.8984 | 0.1400 | 0.9628 | -18.2584 | -9.1680 | -1.1014 | -1.1028 |
| 0.624 | 1.4028 | 350 | 0.5965 | 0.0676 | -0.9908 | 0.1400 | 1.0585 | -18.5665 | -9.1573 | -1.1032 | -1.1046 |
| 0.5728 | 1.6032 | 400 | 0.5965 | 0.0722 | -1.0529 | 0.1400 | 1.1250 | -18.7733 | -9.1423 | -1.1052 | -1.1066 |
| 0.5893 | 1.8036 | 450 | 0.5964 | 0.0748 | -1.0956 | 0.1400 | 1.1704 | -18.9158 | -9.1336 | -1.1062 | -1.1075 |
| 0.5719 | 2.0040 | 500 | 0.5964 | 0.0693 | -1.1155 | 0.1400 | 1.1848 | -18.9820 | -9.1518 | -1.1066 | -1.1079 |
| 0.5719 | 2.2044 | 550 | 0.5964 | 0.0760 | -1.1221 | 0.1400 | 1.1981 | -19.0042 | -9.1295 | -1.1069 | -1.1082 |
| 0.5546 | 2.4048 | 600 | 0.5964 | 0.0686 | -1.1465 | 0.1400 | 1.2151 | -19.0856 | -9.1542 | -1.1071 | -1.1084 |
| 0.52 | 2.6052 | 650 | 0.5964 | 0.0707 | -1.1510 | 0.1400 | 1.2217 | -19.1005 | -9.1471 | -1.1066 | -1.1079 |
| 0.6243 | 2.8056 | 700 | 0.5963 | 0.0745 | -1.1541 | 0.1400 | 1.2286 | -19.1107 | -9.1345 | -1.1075 | -1.1088 |
| 0.6065 | 3.0060 | 750 | 0.5963 | 0.0758 | -1.1510 | 0.1400 | 1.2268 | -19.1006 | -9.1301 | -1.1071 | -1.1084 |
| 0.6412 | 3.2064 | 800 | 0.5964 | 0.0704 | -1.1555 | 0.1400 | 1.2259 | -19.1153 | -9.1480 | -1.1070 | -1.1083 |
| 0.6585 | 3.4068 | 850 | 0.5963 | 0.0726 | -1.1522 | 0.1400 | 1.2248 | -19.1045 | -9.1408 | -1.1073 | -1.1086 |
| 0.6238 | 3.6072 | 900 | 0.5963 | 0.0735 | -1.1585 | 0.1400 | 1.2320 | -19.1256 | -9.1378 | -1.1071 | -1.1084 |
| 0.5372 | 3.8076 | 950 | 0.5964 | 0.0711 | -1.1551 | 0.1400 | 1.2262 | -19.1142 | -9.1459 | -1.1071 | -1.1083 |
| 0.6239 | 4.0080 | 1000 | 0.5964 | 0.0711 | -1.1551 | 0.1400 | 1.2262 | -19.1142 | -9.1459 | -1.1071 | -1.1083 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
sid-du/model-upload-test
|
sid-du
| 2024-06-22T21:03:08Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T20:54:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ilhami/AcademicTranslation2024-tr-to-en
|
ilhami
| 2024-06-22T20:57:36Z | 20 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"chemistry",
"biology",
"medical",
"translation",
"tr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-05T23:03:43Z |
---
license: apache-2.0
language:
- tr
- en
metrics:
- bleu
pipeline_tag: translation
tags:
- chemistry
- biology
- medical
---
```python
# Load the Turkish-to-English academic translation model and translate a batch
# of abstract sentences.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "ilhami/AcademicTranslation2024-tr-to-en"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to("cuda")

# Turkish source sentences (an abstract on chatbot intent prediction)
tr = ["Sohbet robotları son yıllarda yaygın bir şekilde kullanılmaya başlanmıştır. ",
      "İnsanları taklit eden ve daha iyi müşteri memnuniyeti sağlayan sohbet robotları en gelişkin doğal dil işleme tekniklerine ihtiyaç duymaktadır. ",
      "Bu çalışma sohbet robotu konuşmalarının niyet tahminini geliştirmeye odaklanmıştır.",
      "Kelime gösterimi için TF-IDF, Doc2vec ve BERT gibi geleneksel ve gelişmiş doğal dil işleme yöntemleri, çoklu sınıf ve çoklu etiket tahmini için ise lojistik regresyon, rastgele orman ve yapay sinir ağları kullanılmıştır.",
      "Sohbet robotu konuşma veri kümeleri, sinema bileti rezervasyonu, restoran rezervasyonu ve taksi çağırma olmak üzere üç farklı alandan alınmıştır. ",
      "Bu çalışmanın sonunda, BERT ve BERT ile TF-IDF birleşimi modellerin diğer kombinasyonlardan daha iyi sonuç verdiği görülmüştür. ",
      "BERT gibi ön eğitimli modellerden faydalanmanın daha iyi bağlamsal anlama sağladığı ortaya çıkmıştır. ",
      "TF-IDF yerleştirmeleri, BERT gösterimi ile birleştirilerek niyet kategorisi tahmininin iyileştirilmesi amaçlanmıştır."]

# Tokenize, generate, and decode the English translations
encoded_text = tokenizer(tr, return_tensors="pt", padding=True).to("cuda")
generated_tokens = model.generate(**encoded_text)
en = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(en)
```
|
John6666/cute-core-v1-sdxl
|
John6666
| 2024-06-22T20:53:49Z | 2,392 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-22T20:49:00Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://civitai.com/models/129282?modelVersionId=300618).
|
danielkosyra/polynomial_1450_7e-4_16b_w0.05
|
danielkosyra
| 2024-06-22T20:47:56Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T20:47:37Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: polynomial_1450_7e-4_16b_w0.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polynomial_1450_7e-4_16b_w0.05
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0007
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- training_steps: 1450
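For reference, the `polynomial` scheduler above corresponds to `transformers`' polynomial decay with warmup. A minimal sketch (not the training script; `power=1.0` is the assumed default):
```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model parameters
optimizer = torch.optim.AdamW(params, lr=7e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=250, num_training_steps=1450
)
for _ in range(1450):
    optimizer.step()
    scheduler.step()
```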
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 9.0635 | 0.1029 | 50 | 7.2771 |
| 6.7176 | 0.2058 | 100 | 6.2551 |
| 6.0127 | 0.3088 | 150 | 5.7232 |
| 5.5517 | 0.4117 | 200 | 5.3470 |
| 5.2297 | 0.5146 | 250 | 5.0446 |
| 4.9361 | 0.6175 | 300 | 4.7729 |
| 4.6976 | 0.7205 | 350 | 4.5588 |
| 4.497 | 0.8234 | 400 | 4.3733 |
| 4.3221 | 0.9263 | 450 | 4.1939 |
| 4.1357 | 1.0292 | 500 | 4.0081 |
| 3.892 | 1.1322 | 550 | 3.8139 |
| 3.7559 | 1.2351 | 600 | 3.6703 |
| 3.6297 | 1.3380 | 650 | 3.5671 |
| 3.5399 | 1.4409 | 700 | 3.4772 |
| 3.4656 | 1.5438 | 750 | 3.4074 |
| 3.3949 | 1.6468 | 800 | 3.3532 |
| 3.3297 | 1.7497 | 850 | 3.3031 |
| 3.2878 | 1.8526 | 900 | 3.2604 |
| 3.254 | 1.9555 | 950 | 3.2267 |
| 3.1231 | 2.0585 | 1000 | 3.1899 |
| 3.0568 | 2.1614 | 1050 | 3.1603 |
| 3.0347 | 2.2643 | 1100 | 3.1349 |
| 3.0197 | 2.3672 | 1150 | 3.1148 |
| 2.9893 | 2.4702 | 1200 | 3.0940 |
| 2.9801 | 2.5731 | 1250 | 3.0725 |
| 2.951 | 2.6760 | 1300 | 3.0551 |
| 2.9265 | 2.7789 | 1350 | 3.0397 |
| 2.9438 | 2.8818 | 1400 | 3.0299 |
| 2.9292 | 2.9848 | 1450 | 3.0237 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Anujgr8/Whisper-Anuj-Medum-Marathi
|
Anujgr8
| 2024-06-22T20:31:21Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-22T15:33:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Essacheez/gemma-1.1-7b-it-finetune-summerization-10k-gemma-style
|
Essacheez
| 2024-06-22T20:16:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T19:22:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
itisarainyday/llemma-2-7b-ft-merged-v8
|
itisarainyday
| 2024-06-22T20:11:41Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T15:52:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gechim/XMLRoberta_Dataset59KBoDuoi
|
gechim
| 2024-06-22T19:47:00Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T19:46:28Z |
---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: XMLRoberta_Dataset59KBoDuoi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XMLRoberta_Dataset59KBoDuoi
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4792
- Accuracy: 0.8964
- F1: 0.8969
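A hedged usage sketch (not from the card); the label set is whatever this fine-tune defined:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="gechim/XMLRoberta_Dataset59KBoDuoi")
print(clf("Ví dụ câu cần phân loại."))  # illustrative input
```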
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| No log | 0.5115 | 200 | 0.4025 | 0.8084 | 0.8111 |
| No log | 1.0230 | 400 | 0.3500 | 0.8424 | 0.8451 |
| No log | 1.5345 | 600 | 0.3312 | 0.8637 | 0.8612 |
| 0.4018 | 2.0460 | 800 | 0.3394 | 0.8580 | 0.8610 |
| 0.4018 | 2.5575 | 1000 | 0.2938 | 0.8747 | 0.8760 |
| 0.4018 | 3.0691 | 1200 | 0.2903 | 0.8829 | 0.8841 |
| 0.4018 | 3.5806 | 1400 | 0.2871 | 0.8854 | 0.8859 |
| 0.2576 | 4.0921 | 1600 | 0.2955 | 0.8864 | 0.8873 |
| 0.2576 | 4.6036 | 1800 | 0.2831 | 0.8887 | 0.8894 |
| 0.2576 | 5.1151 | 2000 | 0.2952 | 0.8885 | 0.8898 |
| 0.2576 | 5.6266 | 2200 | 0.2947 | 0.8872 | 0.8881 |
| 0.2036 | 6.1381 | 2400 | 0.3086 | 0.8887 | 0.8902 |
| 0.2036 | 6.6496 | 2600 | 0.2939 | 0.8924 | 0.8931 |
| 0.2036 | 7.1611 | 2800 | 0.3368 | 0.8879 | 0.8895 |
| 0.2036 | 7.6726 | 3000 | 0.3162 | 0.8924 | 0.8932 |
| 0.1616 | 8.1841 | 3200 | 0.3423 | 0.8909 | 0.8919 |
| 0.1616 | 8.6957 | 3400 | 0.3475 | 0.8940 | 0.8945 |
| 0.1616 | 9.2072 | 3600 | 0.3546 | 0.8914 | 0.8923 |
| 0.1616 | 9.7187 | 3800 | 0.3505 | 0.8941 | 0.8947 |
| 0.1291 | 10.2302 | 4000 | 0.3850 | 0.8934 | 0.8941 |
| 0.1291 | 10.7417 | 4200 | 0.3718 | 0.8957 | 0.8963 |
| 0.1291 | 11.2532 | 4400 | 0.3893 | 0.8916 | 0.8924 |
| 0.1291 | 11.7647 | 4600 | 0.3923 | 0.8949 | 0.8955 |
| 0.1047 | 12.2762 | 4800 | 0.4213 | 0.8959 | 0.8968 |
| 0.1047 | 12.7877 | 5000 | 0.3877 | 0.8951 | 0.8961 |
| 0.1047 | 13.2992 | 5200 | 0.3972 | 0.8990 | 0.8992 |
| 0.1047 | 13.8107 | 5400 | 0.3896 | 0.8928 | 0.8937 |
| 0.0865 | 14.3223 | 5600 | 0.4290 | 0.8961 | 0.8964 |
| 0.0865 | 14.8338 | 5800 | 0.4360 | 0.8977 | 0.8979 |
| 0.0865 | 15.3453 | 6000 | 0.4398 | 0.8958 | 0.8963 |
| 0.0865 | 15.8568 | 6200 | 0.4357 | 0.8951 | 0.8955 |
| 0.0726 | 16.3683 | 6400 | 0.4662 | 0.8952 | 0.8953 |
| 0.0726 | 16.8798 | 6600 | 0.4608 | 0.8945 | 0.8955 |
| 0.0726 | 17.3913 | 6800 | 0.4714 | 0.8952 | 0.8954 |
| 0.0726 | 17.9028 | 7000 | 0.4638 | 0.8967 | 0.8971 |
| 0.0612 | 18.4143 | 7200 | 0.4783 | 0.8969 | 0.8971 |
| 0.0612 | 18.9258 | 7400 | 0.4856 | 0.8962 | 0.8967 |
| 0.0612 | 19.4373 | 7600 | 0.4779 | 0.8958 | 0.8963 |
| 0.0612 | 19.9488 | 7800 | 0.4792 | 0.8964 | 0.8969 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
tsavage68/Summary_L3_1000steps_1e8rate_03beta_CSFTDPO
|
tsavage68
| 2024-06-22T19:46:44Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"base_model:finetune:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T19:43:06Z |
---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_1000steps_1e8rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_1000steps_1e8rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Rewards/chosen: -0.0023
- Rewards/rejected: -0.0059
- Rewards/accuracies: 0.0650
- Rewards/margins: 0.0036
- Logps/rejected: -15.2835
- Logps/chosen: -9.3904
- Logits/rejected: -1.0962
- Logits/chosen: -1.0977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6866 | 0.2004 | 50 | 0.6914 | -0.0024 | -0.0068 | 0.0750 | 0.0044 | -15.2865 | -9.3909 | -1.0958 | -1.0972 |
| 0.6966 | 0.4008 | 100 | 0.6896 | 0.0031 | -0.0051 | 0.0850 | 0.0082 | -15.2806 | -9.3724 | -1.0965 | -1.0979 |
| 0.6924 | 0.6012 | 150 | 0.6911 | -0.0000 | -0.0053 | 0.0850 | 0.0053 | -15.2813 | -9.3828 | -1.0957 | -1.0972 |
| 0.6908 | 0.8016 | 200 | 0.6901 | 0.0009 | -0.0058 | 0.0900 | 0.0066 | -15.2830 | -9.3799 | -1.0957 | -1.0971 |
| 0.6922 | 1.0020 | 250 | 0.6889 | 0.0008 | -0.0086 | 0.0950 | 0.0094 | -15.2923 | -9.3800 | -1.0959 | -1.0974 |
| 0.6944 | 1.2024 | 300 | 0.6906 | -0.0011 | -0.0069 | 0.0900 | 0.0058 | -15.2869 | -9.3865 | -1.0957 | -1.0971 |
| 0.6919 | 1.4028 | 350 | 0.6878 | 0.0019 | -0.0099 | 0.0900 | 0.0117 | -15.2966 | -9.3766 | -1.0961 | -1.0975 |
| 0.6937 | 1.6032 | 400 | 0.6879 | 0.0049 | -0.0067 | 0.0900 | 0.0116 | -15.2860 | -9.3664 | -1.0963 | -1.0977 |
| 0.6927 | 1.8036 | 450 | 0.6903 | 0.0001 | -0.0065 | 0.0850 | 0.0066 | -15.2854 | -9.3824 | -1.0962 | -1.0977 |
| 0.6917 | 2.0040 | 500 | 0.6922 | -0.0002 | -0.0030 | 0.0700 | 0.0028 | -15.2739 | -9.3835 | -1.0959 | -1.0973 |
| 0.6983 | 2.2044 | 550 | 0.6911 | -0.0014 | -0.0068 | 0.0750 | 0.0053 | -15.2863 | -9.3875 | -1.0960 | -1.0974 |
| 0.6901 | 2.4048 | 600 | 0.6902 | 0.0002 | -0.0065 | 0.0900 | 0.0067 | -15.2854 | -9.3820 | -1.0967 | -1.0982 |
| 0.6859 | 2.6052 | 650 | 0.6890 | 0.0027 | -0.0066 | 0.0950 | 0.0093 | -15.2858 | -9.3738 | -1.0964 | -1.0978 |
| 0.694 | 2.8056 | 700 | 0.6910 | 0.0002 | -0.0048 | 0.0850 | 0.0050 | -15.2799 | -9.3823 | -1.0963 | -1.0978 |
| 0.6909 | 3.0060 | 750 | 0.6936 | -0.0027 | -0.0025 | 0.0600 | -0.0002 | -15.2720 | -9.3918 | -1.0964 | -1.0978 |
| 0.6909 | 3.2064 | 800 | 0.6912 | -0.0017 | -0.0065 | 0.0650 | 0.0049 | -15.2855 | -9.3883 | -1.0963 | -1.0977 |
| 0.6929 | 3.4068 | 850 | 0.6914 | -0.0008 | -0.0054 | 0.0800 | 0.0047 | -15.2819 | -9.3853 | -1.0962 | -1.0976 |
| 0.6938 | 3.6072 | 900 | 0.6919 | -0.0023 | -0.0059 | 0.0650 | 0.0036 | -15.2835 | -9.3904 | -1.0962 | -1.0977 |
| 0.69 | 3.8076 | 950 | 0.6919 | -0.0023 | -0.0059 | 0.0650 | 0.0036 | -15.2835 | -9.3904 | -1.0962 | -1.0977 |
| 0.6968 | 4.0080 | 1000 | 0.6919 | -0.0023 | -0.0059 | 0.0650 | 0.0036 | -15.2835 | -9.3904 | -1.0962 | -1.0977 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
anhng94/output_qwen
|
anhng94
| 2024-06-22T19:44:06Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-06-15T05:54:56Z |
---
base_model: Qwen/Qwen2-7B-Instruct
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: output_qwen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_qwen
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 5.0
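A hedged usage sketch (not from the card): loading the PEFT adapter on top of the Qwen2 base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "anhng94/output_qwen")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
```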
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
y1xing/OrpoLlama-3-8B-Instruct-LEARN
|
y1xing
| 2024-06-22T19:31:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T19:03:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lmstudio-community/DeepSeek-Coder-V2-Lite-Instruct-GGUF
|
lmstudio-community
| 2024-06-22T19:11:36Z | 74,785 | 37 | null |
[
"gguf",
"text-generation",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
"base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-17T18:01:28Z |
---
license: other
license_name: deepseek-license
license_link: LICENSE
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
  param_count: 16b
  use_case: coding
  release_date: 17-06-2024
  model_creator: DeepSeek
  prompt_template: DeepSeek Chat
  system_prompt: none
  base_model: DeepSeek
  original_repo: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
---
## 💫 Community Model> DeepSeek-Coder-V2-Lite-Instruct by DeepSeek
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [DeepSeek](https://huggingface.co/deepseek-ai)<br>
**Original model**: [DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3166](https://github.com/ggerganov/llama.cpp/releases/tag/b3166)<br>
## Model Settings:
Requires LM Studio 0.2.25; the update can be downloaded from https://lmstudio.ai
Flash attention MUST be **disabled** for this model to work.
## Model Summary:
This is a brand new Mixture of Experts (MoE) model from DeepSeek, specializing in coding instructions.<br>
This model performs well across a series of coding benchmarks and should be used for both instruction following and code completion.
## Prompt template:
The best performing template is the `Deepseek Coder` preset in LM Studio.
This will format the prompt as follows:
```
You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science.
### Instruction: {user_message}
### Response: {assistant_message}
```
The "official" template seems to tend towards generating Chinese, however if you'd like to use it you can set it up by choosing the `LM Studio Blank Preset` preset in your LM Studio and then:
Set your User Message Prefix to `User: `
Set your User Message Suffix to `\n\nAssistant: `
This will format the prompt as follows:
```
User: {user_message}
Assistant: {assistant_message}
```
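If you are driving the model outside LM Studio, the same "official" format can be assembled by hand (a trivial sketch of the template shown above):
```python
def format_prompt(user_message: str) -> str:
    # Mirrors the User/Assistant template above; no system prompt is used.
    return f"User: {user_message}\n\nAssistant: "
```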
## Technical Details
This model is an MoE architecture, using 16B total weights with only 2.4B activated to achieve excellent inference speed.
DeepSeek-Coder-V2 is based on the DeepSeek-V2 model, further trained on 6 trillion high quality coding tokens to enhance coding and mathematical reasoning.
It supports an incredible 128k context length.
For more details, read their paper here: https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/)
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) that was used for calculating the imatrix for all sizes.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
davin45/insta-sentiment-distill-roberta
|
davin45
| 2024-06-22T19:10:53Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T18:51:34Z |
---
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: insta-sentiment-distill-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# insta-sentiment-distill-roberta
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4161
- Accuracy: 0.823
- F1: 0.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
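As a hedged reproduction sketch, these settings map onto 🤗 `TrainingArguments` as follows (the output directory is an assumption, and the Adam betas/epsilon match the `Trainer` defaults, so they need no explicit arguments):
```python
from transformers import TrainingArguments

# Assumed mapping of the hyperparameters listed above; the training dataset
# is undocumented, so only optimizer and schedule settings are reproduced.
args = TrainingArguments(
    output_dir="insta-sentiment-distill-roberta",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```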
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4484 | 1.6 | 1000 | 0.4161 | 0.823 | 0.8229 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf
|
RichardErkhov
| 2024-06-22T19:06:03Z | 29 | 0 | null |
[
"gguf",
"arxiv:2205.14728",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T18:43:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
marathi-gpt-gemma-2b - GGUF
- Model creator: https://huggingface.co/l3cube-pune/
- Original model: https://huggingface.co/l3cube-pune/marathi-gpt-gemma-2b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [marathi-gpt-gemma-2b.Q2_K.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q2_K.gguf) | Q2_K | 1.08GB |
| [marathi-gpt-gemma-2b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [marathi-gpt-gemma-2b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [marathi-gpt-gemma-2b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [marathi-gpt-gemma-2b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [marathi-gpt-gemma-2b.Q3_K.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q3_K.gguf) | Q3_K | 1.29GB |
| [marathi-gpt-gemma-2b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [marathi-gpt-gemma-2b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [marathi-gpt-gemma-2b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [marathi-gpt-gemma-2b.Q4_0.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q4_0.gguf) | Q4_0 | 1.44GB |
| [marathi-gpt-gemma-2b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [marathi-gpt-gemma-2b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [marathi-gpt-gemma-2b.Q4_K.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q4_K.gguf) | Q4_K | 1.52GB |
| [marathi-gpt-gemma-2b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [marathi-gpt-gemma-2b.Q4_1.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q4_1.gguf) | Q4_1 | 1.56GB |
| [marathi-gpt-gemma-2b.Q5_0.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q5_0.gguf) | Q5_0 | 1.68GB |
| [marathi-gpt-gemma-2b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [marathi-gpt-gemma-2b.Q5_K.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q5_K.gguf) | Q5_K | 1.71GB |
| [marathi-gpt-gemma-2b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [marathi-gpt-gemma-2b.Q5_1.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q5_1.gguf) | Q5_1 | 1.79GB |
| [marathi-gpt-gemma-2b.Q6_K.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q6_K.gguf) | Q6_K | 1.92GB |
| [marathi-gpt-gemma-2b.Q8_0.gguf](https://huggingface.co/RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf/blob/main/marathi-gpt-gemma-2b.Q8_0.gguf) | Q8_0 | 2.49GB |
Original model description:
---
license: cc-by-4.0
language: mr
widget:
# - text: <bos>\n### Instruction:\n(9+0)+(10+5)? 3 चरणांमध्ये सोडवा\n\n### Input:\n\n\n### Response:\n
- text: <bos>\n### Instruction:\nमहाराष्ट्राची राजधानी काय आहे?\n\n### Input:\n\n\n### Response:\n
---
## MahaGemma-2B
MahaGemma-2B is a Marathi Gemma model. It is a Gemma 2B (google/gemma-2b) model LoRA fine-tuned on translated Marathi datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
This is part of the MahaNLP initiative. More details coming soon. <br>
Prompt format:
```
<bos>\n### Instruction:\nमहाराष्ट्राची राजधानी काय आहे?\n\n### Input:\n\n\n### Response:\nमहाराष्ट्राची राजधानी मुंबई आहे
```
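As a hedged usage sketch, one of the GGUF files above can be run with llama.cpp's CLI (the quant choice is illustrative, and the `-e` flag is assumed available to process the `\n` escapes):
```bash
llama-cli --hf-repo RichardErkhov/l3cube-pune_-_marathi-gpt-gemma-2b-gguf \
  --hf-file marathi-gpt-gemma-2b.Q4_K_M.gguf \
  -e -p "<bos>\n### Instruction:\nमहाराष्ट्राची राजधानी काय आहे?\n\n### Input:\n\n\n### Response:\n"
```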
Citing
```
@article{joshi2022l3cube,
title={L3cube-mahanlp: Marathi natural language processing datasets, models, and library},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2205.14728},
year={2022}
}
```
Model Family: <br>
<a href="https://huggingface.co/l3cube-pune/marathi-gpt-gemma-2b"> MahaGemma-2B </a> <br>
<a href="https://huggingface.co/l3cube-pune/marathi-gpt-gemma-7b"> MahaGemma-7B </a>
|
CHE-72/Qwen1.5-4B-Chat-Q2_K-GGUF
|
CHE-72
| 2024-06-22T19:05:52Z | 76 | 0 | null |
[
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"base_model:quantized:Qwen/Qwen1.5-4B-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-22T19:05:44Z |
---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q2_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q2_K-GGUF --hf-file qwen1.5-4b-chat-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q2_K-GGUF --hf-file qwen1.5-4b-chat-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q2_K-GGUF --hf-file qwen1.5-4b-chat-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q2_K-GGUF --hf-file qwen1.5-4b-chat-q2_k.gguf -c 2048
```
|
RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf
|
RichardErkhov
| 2024-06-22T19:04:30Z | 39 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T18:49:06Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-1b-sft-MATH - GGUF
- Model creator: https://huggingface.co/realtreetune/
- Original model: https://huggingface.co/realtreetune/rho-1b-sft-MATH/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [rho-1b-sft-MATH.Q2_K.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q2_K.gguf) | Q2_K | 0.4GB |
| [rho-1b-sft-MATH.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [rho-1b-sft-MATH.IQ3_S.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [rho-1b-sft-MATH.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [rho-1b-sft-MATH.IQ3_M.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [rho-1b-sft-MATH.Q3_K.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q3_K.gguf) | Q3_K | 0.51GB |
| [rho-1b-sft-MATH.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [rho-1b-sft-MATH.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [rho-1b-sft-MATH.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [rho-1b-sft-MATH.Q4_0.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q4_0.gguf) | Q4_0 | 0.59GB |
| [rho-1b-sft-MATH.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [rho-1b-sft-MATH.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [rho-1b-sft-MATH.Q4_K.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q4_K.gguf) | Q4_K | 0.62GB |
| [rho-1b-sft-MATH.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [rho-1b-sft-MATH.Q4_1.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q4_1.gguf) | Q4_1 | 0.65GB |
| [rho-1b-sft-MATH.Q5_0.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q5_0.gguf) | Q5_0 | 0.71GB |
| [rho-1b-sft-MATH.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [rho-1b-sft-MATH.Q5_K.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q5_K.gguf) | Q5_K | 0.73GB |
| [rho-1b-sft-MATH.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [rho-1b-sft-MATH.Q5_1.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q5_1.gguf) | Q5_1 | 0.77GB |
| [rho-1b-sft-MATH.Q6_K.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q6_K.gguf) | Q6_K | 0.84GB |
| [rho-1b-sft-MATH.Q8_0.gguf](https://huggingface.co/RichardErkhov/realtreetune_-_rho-1b-sft-MATH-gguf/blob/main/rho-1b-sft-MATH.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf
|
RichardErkhov
| 2024-06-22T19:03:20Z | 9 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T18:57:34Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
smol_llama-220M-GQA - GGUF
- Model creator: https://huggingface.co/BEE-spoke-data/
- Original model: https://huggingface.co/BEE-spoke-data/smol_llama-220M-GQA/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol_llama-220M-GQA.Q2_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q2_K.gguf) | Q2_K | 0.09GB |
| [smol_llama-220M-GQA.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.IQ3_XS.gguf) | IQ3_XS | 0.1GB |
| [smol_llama-220M-GQA.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.IQ3_S.gguf) | IQ3_S | 0.1GB |
| [smol_llama-220M-GQA.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q3_K_S.gguf) | Q3_K_S | 0.1GB |
| [smol_llama-220M-GQA.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.IQ3_M.gguf) | IQ3_M | 0.1GB |
| [smol_llama-220M-GQA.Q3_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q3_K.gguf) | Q3_K | 0.11GB |
| [smol_llama-220M-GQA.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q3_K_M.gguf) | Q3_K_M | 0.11GB |
| [smol_llama-220M-GQA.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q3_K_L.gguf) | Q3_K_L | 0.11GB |
| [smol_llama-220M-GQA.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.IQ4_XS.gguf) | IQ4_XS | 0.12GB |
| [smol_llama-220M-GQA.Q4_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q4_0.gguf) | Q4_0 | 0.12GB |
| [smol_llama-220M-GQA.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.IQ4_NL.gguf) | IQ4_NL | 0.12GB |
| [smol_llama-220M-GQA.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q4_K_S.gguf) | Q4_K_S | 0.12GB |
| [smol_llama-220M-GQA.Q4_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q4_K.gguf) | Q4_K | 0.13GB |
| [smol_llama-220M-GQA.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q4_K_M.gguf) | Q4_K_M | 0.13GB |
| [smol_llama-220M-GQA.Q4_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q4_1.gguf) | Q4_1 | 0.13GB |
| [smol_llama-220M-GQA.Q5_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q5_0.gguf) | Q5_0 | 0.14GB |
| [smol_llama-220M-GQA.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q5_K_S.gguf) | Q5_K_S | 0.14GB |
| [smol_llama-220M-GQA.Q5_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q5_K.gguf) | Q5_K | 0.15GB |
| [smol_llama-220M-GQA.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q5_K_M.gguf) | Q5_K_M | 0.15GB |
| [smol_llama-220M-GQA.Q5_1.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q5_1.gguf) | Q5_1 | 0.16GB |
| [smol_llama-220M-GQA.Q6_K.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q6_K.gguf) | Q6_K | 0.17GB |
| [smol_llama-220M-GQA.Q8_0.gguf](https://huggingface.co/RichardErkhov/BEE-spoke-data_-_smol_llama-220M-GQA-gguf/blob/main/smol_llama-220M-GQA.Q8_0.gguf) | Q8_0 | 0.22GB |
Original model description:
---
language:
- en
license: apache-2.0
tags:
- smol_llama
- llama2
datasets:
- JeanKaddour/minipile
- pszemraj/simple_wikipedia_LM
- mattymchen/refinedweb-3m
- BEE-spoke-data/knowledge-inoc-concat-v1
inference:
parameters:
max_new_tokens: 64
do_sample: true
temperature: 0.8
repetition_penalty: 1.05
no_repeat_ngram_size: 4
eta_cutoff: 0.0006
renormalize_logits: true
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made Destiny.
The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled
example_title: Harry Potter Series
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:'
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
and another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine'
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
pipeline_tag: text-generation
model-index:
- name: smol_llama-220M-GQA
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 24.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 29.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.55
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
---
# smol_llama: 220M GQA
A small 220M param (total) decoder model. This is the first version of the model; a usage sketch follows the spec list below.
- 1024 hidden size, 10 layers
- GQA (32 heads, 8 key-value), context length 2048
- train-from-scratch on one GPU :)
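As a quick usage sketch (the sampling settings are a subset of the card's inference config, and the prompt is one of the widget examples):
```python
from transformers import pipeline

# Text generation with the card's suggested sampling parameters.
pipe = pipeline("text-generation", model="BEE-spoke-data/smol_llama-220M-GQA")
out = pipe(
    "My name is El Microondas the Wise, and",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.05,
    no_repeat_ngram_size=4,
)
print(out[0]["generated_text"])
```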
## Links
[Here](https://huggingface.co/collections/BEE-spoke-data/finetuned-smol-220m-65998b080ae723e79c830f83) are some fine-tunes we did, but there are many more possibilities out there!
- instruct
- openhermes - [link](https://huggingface.co/BEE-spoke-data/smol_llama-220M-openhermes)
- open-instruct - [link](https://huggingface.co/BEE-spoke-data/smol_llama-220M-open_instruct)
- code
- python (pypi) - [link](https://huggingface.co/BEE-spoke-data/beecoder-220M-python)
- zephyr DPO tune
- SFT - [link](https://huggingface.co/BEE-spoke-data/zephyr-220m-sft-full)
- full DPO - [link](https://huggingface.co/BEE-spoke-data/zephyr-220m-dpo-full)
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-220M-GQA)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.44|
|AI2 Reasoning Challenge (25-Shot)|24.83|
|HellaSwag (10-Shot) |29.76|
|MMLU (5-Shot) |25.85|
|TruthfulQA (0-shot) |44.55|
|Winogrande (5-shot) |50.99|
|GSM8k (5-shot) | 0.68|
|
mradermacher/Secure-deepseek-coder-v2-MoE-GGUF
|
mradermacher
| 2024-06-22T19:02:33Z | 71 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Ferrag/Secure-deepseek-coder-v2-MoE",
"base_model:quantized:Ferrag/Secure-deepseek-coder-v2-MoE",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T18:03:47Z |
---
base_model: Ferrag/Secure-deepseek-coder-v2-MoE
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Ferrag/Secure-deepseek-coder-v2-MoE
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
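For reference, split files can usually be rejoined with a plain concatenation before loading (the file names below are purely illustrative; this repo's quants may not be split at all):
```bash
cat Secure-deepseek-coder-v2-MoE.Q8_0.gguf.part1of2 \
    Secure-deepseek-coder-v2-MoE.Q8_0.gguf.part2of2 \
    > Secure-deepseek-coder-v2-MoE.Q8_0.gguf
```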
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q2_K.gguf) | Q2_K | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.IQ3_XS.gguf) | IQ3_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.IQ3_S.gguf) | IQ3_S | 7.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q3_K_S.gguf) | Q3_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.IQ3_M.gguf) | IQ3_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q3_K_M.gguf) | Q3_K_M | 8.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q3_K_L.gguf) | Q3_K_L | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.IQ4_XS.gguf) | IQ4_XS | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q4_K_S.gguf) | Q4_K_S | 9.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q4_K_M.gguf) | Q4_K_M | 10.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q5_K_S.gguf) | Q5_K_S | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q5_K_M.gguf) | Q5_K_M | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q6_K.gguf) | Q6_K | 14.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Secure-deepseek-coder-v2-MoE-GGUF/resolve/main/Secure-deepseek-coder-v2-MoE.Q8_0.gguf) | Q8_0 | 16.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CHE-72/Qwen1.5-4B-Chat-Q4_0-GGUF
|
CHE-72
| 2024-06-22T19:01:05Z | 4 | 0 | null |
[
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"base_model:quantized:Qwen/Qwen1.5-4B-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-22T19:00:50Z |
---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q4_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_0-GGUF --hf-file qwen1.5-4b-chat-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_0-GGUF --hf-file qwen1.5-4b-chat-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_0-GGUF --hf-file qwen1.5-4b-chat-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q4_0-GGUF --hf-file qwen1.5-4b-chat-q4_0.gguf -c 2048
```
|
MT-Distillation/s-bel-eng
|
MT-Distillation
| 2024-06-22T18:57:22Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"be",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-22T18:18:49Z |
---
license: mit
language:
- en
- be
pipeline_tag: translation
---
|
MT-Distillation/s-ukr-eng
|
MT-Distillation
| 2024-06-22T18:57:09Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"uk",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-22T18:08:15Z |
---
license: mit
language:
- en
- uk
pipeline_tag: translation
---
|
MT-Distillation/s-rus-eng
|
MT-Distillation
| 2024-06-22T18:56:55Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-22T18:19:11Z |
---
license: mit
language:
- en
- ru
pipeline_tag: translation
---
|
RichardErkhov/state-spaces_-_mamba-370m-hf-gguf
|
RichardErkhov
| 2024-06-22T18:56:54Z | 81 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T18:47:41Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mamba-370m-hf - GGUF
- Model creator: https://huggingface.co/state-spaces/
- Original model: https://huggingface.co/state-spaces/mamba-370m-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mamba-370m-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q2_K.gguf) | Q2_K | 0.2GB |
| [mamba-370m-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.IQ3_XS.gguf) | IQ3_XS | 0.23GB |
| [mamba-370m-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.IQ3_S.gguf) | IQ3_S | 0.23GB |
| [mamba-370m-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q3_K_S.gguf) | Q3_K_S | 0.23GB |
| [mamba-370m-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.IQ3_M.gguf) | IQ3_M | 0.23GB |
| [mamba-370m-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q3_K.gguf) | Q3_K | 0.23GB |
| [mamba-370m-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q3_K_M.gguf) | Q3_K_M | 0.23GB |
| [mamba-370m-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q3_K_L.gguf) | Q3_K_L | 0.23GB |
| [mamba-370m-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.IQ4_XS.gguf) | IQ4_XS | 0.26GB |
| [mamba-370m-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q4_0.gguf) | Q4_0 | 0.27GB |
| [mamba-370m-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.IQ4_NL.gguf) | IQ4_NL | 0.27GB |
| [mamba-370m-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q4_K_S.gguf) | Q4_K_S | 0.27GB |
| [mamba-370m-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q4_K.gguf) | Q4_K | 0.27GB |
| [mamba-370m-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q4_K_M.gguf) | Q4_K_M | 0.27GB |
| [mamba-370m-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q4_1.gguf) | Q4_1 | 0.28GB |
| [mamba-370m-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q5_0.gguf) | Q5_0 | 0.3GB |
| [mamba-370m-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q5_K_S.gguf) | Q5_K_S | 0.3GB |
| [mamba-370m-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q5_K.gguf) | Q5_K | 0.3GB |
| [mamba-370m-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q5_K_M.gguf) | Q5_K_M | 0.3GB |
| [mamba-370m-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q5_1.gguf) | Q5_1 | 0.32GB |
| [mamba-370m-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q6_K.gguf) | Q6_K | 0.34GB |
| [mamba-370m-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-370m-hf-gguf/blob/main/mamba-370m-hf.Q8_0.gguf) | Q8_0 | 0.42GB |
Original model description:
---
library_name: transformers
tags: []
---
# Mamba
<!-- Provide a quick summary of what the model is/does. -->
This repository contains the `transformers`-compatible `mamba-370m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo.
# Usage
You need to install `transformers` from `main` until `transformers` v4.39.0 is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend installing both `causal-conv1d` and `mamba-ssm` using:
```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm
```
If either of these is not installed, the "eager" implementation will be used. Otherwise, the more optimised `cuda` kernels will be used.
## Generation
You can use the classic `generate` API:
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-370m-hf")
>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-370m-hf")
>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
>>> out = model.generate(input_ids, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm doing great.\n\nI"]
```
## PEFT finetuning example
In order to finetune using the `peft` library, we recommend keeping the model in float32!
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-370m-hf")
model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-370m-hf")
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
```
|
MT-Distillation/s-dan-eng
|
MT-Distillation
| 2024-06-22T18:56:30Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"da",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-22T18:27:14Z |
---
license: mit
language:
- en
- da
pipeline_tag: translation
---
|
MT-Distillation/s-eng-bel
|
MT-Distillation
| 2024-06-22T18:55:36Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"be",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-22T18:27:44Z |
---
license: mit
language:
- en
- be
pipeline_tag: translation
---
|
MT-Distillation/s-eng-dan
|
MT-Distillation
| 2024-06-22T18:55:13Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"da",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-22T18:28:59Z |
---
license: mit
language:
- en
- da
pipeline_tag: translation
---
|
MT-Distillation/s-eng-rus
|
MT-Distillation
| 2024-06-22T18:54:57Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-06-22T18:28:22Z |
---
license: mit
language:
- en
- ru
pipeline_tag: translation
---
|
CHE-72/Qwen1.5-4B-Chat-Q5_0-GGUF
|
CHE-72
| 2024-06-22T18:54:26Z | 7 | 0 | null |
[
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"base_model:quantized:Qwen/Qwen1.5-4B-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-22T18:54:14Z |
---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q5_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_0-GGUF --hf-file qwen1.5-4b-chat-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_0-GGUF --hf-file qwen1.5-4b-chat-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_0-GGUF --hf-file qwen1.5-4b-chat-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_0-GGUF --hf-file qwen1.5-4b-chat-q5_0.gguf -c 2048
```
|
whizzzzkid/test_sn9_6_2
|
whizzzzkid
| 2024-06-22T18:50:19Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T12:17:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF
|
CHE-72
| 2024-06-22T18:49:36Z | 30 | 0 | null |
[
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"base_model:quantized:Qwen/Qwen1.5-4B-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-22T18:49:22Z |
---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q5_K_M-GGUF --hf-file qwen1.5-4b-chat-q5_k_m.gguf -c 2048
```
|
CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF
|
CHE-72
| 2024-06-22T18:48:09Z | 5 | 0 | null |
[
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"base_model:quantized:Qwen/Qwen1.5-4B-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-22T18:47:54Z |
---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -c 2048
```
|
davin45/insta-sentiment-distil-bert
|
davin45
| 2024-06-22T18:44:59Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-22T18:34:54Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: insta-sentiment-distil-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# insta-sentiment-distil-bert
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4678
- Accuracy: 0.7995
- F1: 0.7993
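As a quick smoke test, the checkpoint can be loaded through the `transformers` text-classification pipeline. A minimal sketch — the example caption is arbitrary and the label names depend on the (undocumented) training setup:
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="davin45/insta-sentiment-distil-bert")

# Classify a sample Instagram-style caption; label names depend on training.
print(classifier("Loving the sunset at the beach today!"))
```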
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4586 | 1.6 | 1000 | 0.4678 | 0.7995 | 0.7993 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf
|
RichardErkhov
| 2024-06-22T18:39:19Z | 37 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T18:25:24Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-1.5B-Ita - GGUF
- Model creator: https://huggingface.co/DeepMount00/
- Original model: https://huggingface.co/DeepMount00/Qwen2-1.5B-Ita/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-1.5B-Ita.Q2_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2-1.5B-Ita.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2-1.5B-Ita.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2-1.5B-Ita.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2-1.5B-Ita.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2-1.5B-Ita.Q3_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2-1.5B-Ita.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2-1.5B-Ita.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2-1.5B-Ita.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2-1.5B-Ita.Q4_0.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2-1.5B-Ita.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2-1.5B-Ita.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2-1.5B-Ita.Q4_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2-1.5B-Ita.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2-1.5B-Ita.Q4_1.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2-1.5B-Ita.Q5_0.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2-1.5B-Ita.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2-1.5B-Ita.Q5_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2-1.5B-Ita.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2-1.5B-Ita.Q5_1.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2-1.5B-Ita.Q6_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q6_K.gguf) | Q6_K | 1.18GB |
| [Qwen2-1.5B-Ita.Q8_0.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf/blob/main/Qwen2-1.5B-Ita.Q8_0.gguf) | Q8_0 | 1.53GB |
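The repo ships only the quantized files above; a minimal usage sketch with llama.cpp follows (assuming a recent build with the `--hf-repo`/`--hf-file` download flags; the Q4_K_M file is picked arbitrarily):
```bash
# Download and run one quant directly from the Hub (choice of quant is arbitrary).
llama-cli --hf-repo RichardErkhov/DeepMount00_-_Qwen2-1.5B-Ita-gguf \
  --hf-file Qwen2-1.5B-Ita.Q4_K_M.gguf \
  -p "Scrivi una breve introduzione sui modelli linguistici."
```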
Original model description:
---
language:
- it
- en
license: apache-2.0
library_name: transformers
---
# Qwen2 1.5B: Almost the Same Performance as ITALIA (iGenius) but 6 Times Smaller 🚀
### Model Overview
**Model Name:** Qwen2 1.5B Fine-tuned for Italian Language
**Version:** 1.5b
**Model Type:** Language Model
**Parameter Count:** 1.5 billion
**Language:** Italian
**Comparable Model:** [ITALIA by iGenius](https://huggingface.co/iGeniusAI) (9 billion parameters)
### Model Description
Qwen2 1.5B is a compact language model specifically fine-tuned for the Italian language. Despite its relatively small size of 1.5 billion parameters, Qwen2 1.5B demonstrates strong performance, nearly matching the capabilities of larger models, such as the **9 billion parameter ITALIA model by iGenius**. The fine-tuning process focused on optimizing the model for various language tasks in Italian, making it highly efficient and effective for Italian language applications.
### Performance Evaluation
The performance of Qwen2 1.5B was evaluated on several benchmarks and compared against the ITALIA model. The results are as follows:
| Model | Parameters | Average | MMLU | ARC | HELLASWAG |
|:----------:|:----------:|:-------:|:-----:|:-----:|:---------:|
| ITALIA | 9B | 43.5 | 35.22 | **38.49** | **56.79** |
| Qwen2-1.5B-Ita | 1.5B | **43.98** | **51.45** | 32.34 | 48.15 |
### Conclusion
Qwen2 1.5B demonstrates that a smaller, more efficient model can achieve performance levels comparable to much larger models. It excels in the MMLU benchmark, showing its strength in multitask language understanding. While it scores slightly lower in the ARC and HELLASWAG benchmarks, its overall performance makes it a viable option for Italian language tasks, offering a balance between efficiency and capability.
|
LeoLearntoCode/llama-1.3b-16k
|
LeoLearntoCode
| 2024-06-22T18:34:44Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-21T08:21:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
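Since the card is empty, here is a minimal, unverified sketch based only on the repo's tags (`llama`, `text-generation`, `safetensors`); the prompt and generation settings are placeholders:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage inferred from the repo's tags; the intended prompt
# format and context length are not documented in this card.
tokenizer = AutoTokenizer.from_pretrained("LeoLearntoCode/llama-1.3b-16k")
model = AutoModelForCausalLM.from_pretrained("LeoLearntoCode/llama-1.3b-16k")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```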
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Treza12/pleasework
|
Treza12
| 2024-06-22T18:23:51Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-22T14:46:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
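Given the repo's tags (`mistral`, `conversational`, 4-bit `bitsandbytes`), a minimal, unverified sketch using the chat template might look like this; a CUDA device plus the `accelerate` and `bitsandbytes` packages are assumed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage inferred from the repo's tags; the checkpoint appears
# to ship 4-bit bitsandbytes weights, so a CUDA device is assumed.
tokenizer = AutoTokenizer.from_pretrained("Treza12/pleasework")
model = AutoModelForCausalLM.from_pretrained("Treza12/pleasework", device_map="auto")

messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```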
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gechim/XMLRoberta_Lexical_Dataset59KBoDuoi
|
gechim
| 2024-06-22T18:10:47Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T18:10:13Z |
---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: XMLRoberta_Lexical_Dataset59KBoDuoi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XMLRoberta_Lexical_Dataset59KBoDuoi
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6232
- Accuracy: 0.8988
- F1: 0.8992
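The card does not state the task, but the accuracy/F1 metrics suggest a sequence-classification head; under that assumption, a minimal sketch (the example sentence is arbitrary and the label names are undocumented):
```python
from transformers import pipeline

# Assumes a sequence-classification head; the task and label names
# are not documented in this card.
classifier = pipeline("text-classification", model="gechim/XMLRoberta_Lexical_Dataset59KBoDuoi")
print(classifier("This is a great product."))
```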
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:------:|
| No log | 0.2558 | 200 | 0.4593 | 0.7916 | 0.7880 |
| No log | 0.5115 | 400 | 0.3779 | 0.8222 | 0.8249 |
| No log | 0.7673 | 600 | 0.3462 | 0.8514 | 0.8497 |
| 0.4345 | 1.0230 | 800 | 0.3543 | 0.8554 | 0.8513 |
| 0.4345 | 1.2788 | 1000 | 0.3504 | 0.8573 | 0.8537 |
| 0.4345 | 1.5345 | 1200 | 0.3033 | 0.8767 | 0.8772 |
| 0.4345 | 1.7903 | 1400 | 0.2834 | 0.8778 | 0.8788 |
| 0.3071 | 2.0460 | 1600 | 0.3207 | 0.8671 | 0.8695 |
| 0.3071 | 2.3018 | 1800 | 0.2959 | 0.8822 | 0.8814 |
| 0.3071 | 2.5575 | 2000 | 0.2821 | 0.8778 | 0.8781 |
| 0.3071 | 2.8133 | 2200 | 0.3024 | 0.8872 | 0.8883 |
| 0.2523 | 3.0691 | 2400 | 0.2972 | 0.8888 | 0.8894 |
| 0.2523 | 3.3248 | 2600 | 0.2746 | 0.8883 | 0.8891 |
| 0.2523 | 3.5806 | 2800 | 0.2828 | 0.8909 | 0.8911 |
| 0.2523 | 3.8363 | 3000 | 0.2822 | 0.8941 | 0.8941 |
| 0.2177 | 4.0921 | 3200 | 0.2995 | 0.8898 | 0.8910 |
| 0.2177 | 4.3478 | 3400 | 0.2953 | 0.8887 | 0.8898 |
| 0.2177 | 4.6036 | 3600 | 0.2944 | 0.8925 | 0.8931 |
| 0.2177 | 4.8593 | 3800 | 0.3006 | 0.8957 | 0.8958 |
| 0.189 | 5.1151 | 4000 | 0.2816 | 0.8950 | 0.8955 |
| 0.189 | 5.3708 | 4200 | 0.2865 | 0.8956 | 0.8960 |
| 0.189 | 5.6266 | 4400 | 0.2794 | 0.8961 | 0.8966 |
| 0.189 | 5.8824 | 4600 | 0.2836 | 0.8980 | 0.8986 |
| 0.1637 | 6.1381 | 4800 | 0.3399 | 0.8949 | 0.8951 |
| 0.1637 | 6.3939 | 5000 | 0.3248 | 0.8952 | 0.8957 |
| 0.1637 | 6.6496 | 5200 | 0.3341 | 0.8976 | 0.8979 |
| 0.1637 | 6.9054 | 5400 | 0.2993 | 0.8962 | 0.8970 |
| 0.1388 | 7.1611 | 5600 | 0.3662 | 0.8967 | 0.8978 |
| 0.1388 | 7.4169 | 5800 | 0.3761 | 0.8962 | 0.8968 |
| 0.1388 | 7.6726 | 6000 | 0.3305 | 0.8953 | 0.8961 |
| 0.1388 | 7.9284 | 6200 | 0.3328 | 0.8966 | 0.8970 |
| 0.1193 | 8.1841 | 6400 | 0.3753 | 0.8980 | 0.8985 |
| 0.1193 | 8.4399 | 6600 | 0.3646 | 0.8974 | 0.8976 |
| 0.1193 | 8.6957 | 6800 | 0.3800 | 0.8963 | 0.8966 |
| 0.1193 | 8.9514 | 7000 | 0.3472 | 0.8980 | 0.8987 |
| 0.1059 | 9.2072 | 7200 | 0.3991 | 0.9002 | 0.9004 |
| 0.1059 | 9.4629 | 7400 | 0.4026 | 0.8967 | 0.8978 |
| 0.1059 | 9.7187 | 7600 | 0.3915 | 0.8983 | 0.8983 |
| 0.1059 | 9.9744 | 7800 | 0.3932 | 0.8997 | 0.8999 |
| 0.0923 | 10.2302 | 8000 | 0.4887 | 0.8939 | 0.8947 |
| 0.0923 | 10.4859 | 8200 | 0.4074 | 0.8977 | 0.8981 |
| 0.0923 | 10.7417 | 8400 | 0.3931 | 0.8998 | 0.9003 |
| 0.0806 | 10.9974 | 8600 | 0.4131 | 0.8955 | 0.8964 |
| 0.0806 | 11.2532 | 8800 | 0.4499 | 0.8963 | 0.8970 |
| 0.0806 | 11.5090 | 9000 | 0.4436 | 0.8999 | 0.9002 |
| 0.0806 | 11.7647 | 9200 | 0.4842 | 0.8965 | 0.8968 |
| 0.0697 | 12.0205 | 9400 | 0.4851 | 0.8961 | 0.8963 |
| 0.0697 | 12.2762 | 9600 | 0.5138 | 0.8999 | 0.9002 |
| 0.0697 | 12.5320 | 9800 | 0.5020 | 0.8963 | 0.8964 |
| 0.0697 | 12.7877 | 10000 | 0.5108 | 0.8929 | 0.8940 |
| 0.064 | 13.0435 | 10200 | 0.4893 | 0.8966 | 0.8968 |
| 0.064 | 13.2992 | 10400 | 0.5052 | 0.8973 | 0.8980 |
| 0.064 | 13.5550 | 10600 | 0.4917 | 0.8970 | 0.8971 |
| 0.064 | 13.8107 | 10800 | 0.5087 | 0.8965 | 0.8968 |
| 0.0571 | 14.0665 | 11000 | 0.5195 | 0.8970 | 0.8977 |
| 0.0571 | 14.3223 | 11200 | 0.5279 | 0.8932 | 0.8943 |
| 0.0571 | 14.5780 | 11400 | 0.5015 | 0.8974 | 0.8978 |
| 0.0571 | 14.8338 | 11600 | 0.5301 | 0.8961 | 0.8965 |
| 0.0538 | 15.0895 | 11800 | 0.5297 | 0.8951 | 0.8952 |
| 0.0538 | 15.3453 | 12000 | 0.5573 | 0.8976 | 0.8980 |
| 0.0538 | 15.6010 | 12200 | 0.5579 | 0.8955 | 0.8962 |
| 0.0538 | 15.8568 | 12400 | 0.5814 | 0.8969 | 0.8968 |
| 0.0481 | 16.1125 | 12600 | 0.5861 | 0.8972 | 0.8974 |
| 0.0481 | 16.3683 | 12800 | 0.5871 | 0.8968 | 0.8972 |
| 0.0481 | 16.6240 | 13000 | 0.5913 | 0.8978 | 0.8986 |
| 0.0481 | 16.8798 | 13200 | 0.6100 | 0.8957 | 0.8967 |
| 0.043 | 17.1355 | 13400 | 0.5895 | 0.8976 | 0.8982 |
| 0.043 | 17.3913 | 13600 | 0.5653 | 0.8978 | 0.8982 |
| 0.043 | 17.6471 | 13800 | 0.5914 | 0.8996 | 0.8999 |
| 0.043 | 17.9028 | 14000 | 0.5850 | 0.9005 | 0.9007 |
| 0.042 | 18.1586 | 14200 | 0.5927 | 0.8983 | 0.8988 |
| 0.042 | 18.4143 | 14400 | 0.6164 | 0.8997 | 0.8999 |
| 0.042 | 18.6701 | 14600 | 0.6324 | 0.8986 | 0.8992 |
| 0.042 | 18.9258 | 14800 | 0.6097 | 0.8996 | 0.9001 |
| 0.0383 | 19.1816 | 15000 | 0.6029 | 0.8985 | 0.8989 |
| 0.0383 | 19.4373 | 15200 | 0.6067 | 0.8988 | 0.8992 |
| 0.0383 | 19.6931 | 15400 | 0.6177 | 0.8987 | 0.8991 |
| 0.0383 | 19.9488 | 15600 | 0.6232 | 0.8988 | 0.8992 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
zhangfaen/Florence-2-large-ft
|
zhangfaen
| 2024-06-22T18:09:39Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"florence2",
"text-generation",
"vision",
"image-to-text",
"custom_code",
"arxiv:2311.06242",
"license:mit",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2024-07-02T07:17:46Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large-ft/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This is a copy of Microsoft's model with a few fixes. The PRs for the fixes are open on the original model, but until they merge, I'm using this one to have everything set up correctly.
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
do_sample=False,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
## Tasks
This model is capable of performing different tasks by changing the prompt.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
if text_input is None:
prompt = task_prompt
else:
prompt = task_prompt + text_input
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
print(parsed_answer)
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption-to-phrase-grounding task requires an additional text input, i.e. the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
|
RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf
|
RichardErkhov
| 2024-06-22T18:08:34Z | 146 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-22T17:53:39Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-1.5B-Instruct - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-1.5B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-1.5B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2-1.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2-1.5B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2-1.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2-1.5B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2-1.5B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2-1.5B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2-1.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2-1.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2-1.5B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2-1.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2-1.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2-1.5B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2-1.5B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2-1.5B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2-1.5B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2-1.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2-1.5B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2-1.5B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2-1.5B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2-1.5B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2-1.5B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q8_0.gguf) | Q8_0 | 1.53GB |
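A minimal serving sketch with llama.cpp follows (assuming a recent build with the `--hf-repo`/`--hf-file` download flags; the Q4_K_M quant is chosen arbitrarily):
```bash
# Serve one quant over llama.cpp's OpenAI-compatible HTTP server.
llama-server --hf-repo RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf \
  --hf-file Qwen2-1.5B-Instruct.Q4_K_M.gguf -c 2048
```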
Original model description:
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2-1.5B-Instruct
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen2 is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2-1.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation
We briefly compare Qwen2-1.5B-Instruct with Qwen1.5-1.8B-Chat (the 0.5B models are included in the table for reference). The results are as follows:
| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
|
chenxu0602/ner_twitter_fine_tune
|
chenxu0602
| 2024-06-22T18:06:42Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-22T16:07:42Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_twitter_fine_tune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_twitter_fine_tune
This model is a fine-tuned version of [distilbert/distilbert-base-cased](https://huggingface.co/distilbert/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4211
- Precision: 0.6078
- Recall: 0.5901
- F1: 0.5988
- Accuracy: 0.9308
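For a quick check, the checkpoint can be loaded with the token-classification pipeline; a minimal sketch (the example tweet is arbitrary, and the entity label set depends on the undocumented training data):
```python
from transformers import pipeline

# Load the fine-tuned NER tagger; "simple" aggregation merges word-piece tokens.
ner = pipeline("token-classification",
               model="chenxu0602/ner_twitter_fine_tune",
               aggregation_strategy="simple")
print(ner("Just landed in Paris with the OpenAI folks!"))
```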
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 56 | 0.3586 | 0.6358 | 0.5660 | 0.5989 | 0.9323 |
| No log | 2.0 | 112 | 0.3618 | 0.6069 | 0.5746 | 0.5903 | 0.9297 |
| No log | 3.0 | 168 | 0.3722 | 0.5956 | 0.6038 | 0.5997 | 0.9306 |
| No log | 4.0 | 224 | 0.3993 | 0.6060 | 0.5883 | 0.5970 | 0.9301 |
| No log | 5.0 | 280 | 0.4102 | 0.5411 | 0.6329 | 0.5834 | 0.9232 |
| No log | 6.0 | 336 | 0.4077 | 0.6097 | 0.5815 | 0.5953 | 0.9319 |
| No log | 7.0 | 392 | 0.4096 | 0.5858 | 0.6089 | 0.5971 | 0.9286 |
| No log | 8.0 | 448 | 0.4169 | 0.5975 | 0.5832 | 0.5903 | 0.9297 |
| 0.0111 | 9.0 | 504 | 0.4208 | 0.6064 | 0.5866 | 0.5963 | 0.9309 |
| 0.0111 | 10.0 | 560 | 0.4211 | 0.6078 | 0.5901 | 0.5988 | 0.9308 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0
- Tokenizers 0.15.2
|
mradermacher/Emo-AI-3B-GGUF
|
mradermacher
| 2024-06-22T18:05:05Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"sft",
"en",
"base_model:Klevin/Emo-AI-3B",
"base_model:quantized:Klevin/Emo-AI-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T17:55:49Z |
---
base_model: Klevin/Emo-AI-3B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Klevin/Emo-AI-3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
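As a concrete starting point, one of the single-file quants below can be fetched and run with llama.cpp; a minimal sketch (the Q4_K_M file is picked arbitrarily, and `huggingface-cli` is assumed to be installed):
```bash
# Download one quant file and run it with llama.cpp.
huggingface-cli download mradermacher/Emo-AI-3B-GGUF Emo-AI-3B.Q4_K_M.gguf --local-dir .
llama-cli -m Emo-AI-3B.Q4_K_M.gguf -p "Hello, how are you feeling today?"
```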
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.IQ3_XS.gguf) | IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.IQ3_S.gguf) | IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.IQ3_M.gguf) | IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Emo-AI-3B-GGUF/resolve/main/Emo-AI-3B.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
QuantFactory/Llama-3-Spellbound-Instruct-8B-0.3-GGUF
|
QuantFactory
| 2024-06-22T17:53:46Z | 27 | 0 | null |
[
"gguf",
"text-generation",
"base_model:hf-100/Llama-3-Spellbound-Instruct-8B-0.3",
"base_model:quantized:hf-100/Llama-3-Spellbound-Instruct-8B-0.3",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-06-22T12:06:23Z |
---
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
---
# QuantFactory/Llama-3-Spellbound-Instruct-8B-0.3-GGUF
This is quantized version of [hf-100/Llama-3-Spellbound-Instruct-8B-0.3](https://huggingface.co/hf-100/Llama-3-Spellbound-Instruct-8B-0.3) created using llama.cpp
# Model Description
## Llama-3 Spellbound Instruct Tuning-Free
## Updated Aspects
- Trained on additional tokens
- Improved mix of subject matter model was trained on
- Trained for 1.5M additional tokens
- Additional training on DPO dataset
## Model Rationale
Llama 3 is a strong base model with broad world understanding and creativity. Additional instruct finetuning trades away that world understanding and creativity for instruction following, which Llama does not need in order to adhere to most forms of roleplay.
This model was trained on unstructured text only; no instruct-related fine-tuning was performed.
Made by [tryspellbound.com](https://tryspellbound.com).
*(tryspellbound.com does not currently use this model; it uses Claude 3 Sonnet.)*
## Features of this fine-tune for Llama 3:
- Roleplaying in multi-turn stories where the history is presented in a single message
- Dynamic switching of writing styles for different scenarios
- Interpretation of formatting marks 'quote' and 'action'
**Warning:** The underlying model, Llama 3, was trained on data that included adult content. This fine-tune does not add additional guardrails and is not suitable for all environments.
## Purpose of the Model
The main goal is to explore how presenting LLMs with history and instructions separately affects their performance, demonstrating:
- Improved coherence in long conversations
- Enhanced quality of character interactions
- Decreased instruction adherence, which could be improved with additional training
## Advanced prompting of the model
For advanced prompting, see [this document](https://rentry.co/ti936r2i)
|
QuantFactory/Mistral-Ita-7b-GGUF
|
QuantFactory
| 2024-06-22T17:51:39Z | 151 | 0 | null |
[
"gguf",
"text-generation-inference",
"text generation",
"text-generation",
"it",
"dataset:DeepMount00/llm_ita_ultra",
"base_model:DeepMount00/Mistral-Ita-7b",
"base_model:quantized:DeepMount00/Mistral-Ita-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T11:05:24Z |
---
language:
- it
license: apache-2.0
tags:
- text-generation-inference
- text generation
datasets:
- DeepMount00/llm_ita_ultra
pipeline_tag: text-generation
base_model: DeepMount00/Mistral-Ita-7b
---
# QuantFactory/Mistral-Ita-7b-GGUF
This is quantized version of [DeepMount00/Mistral-Ita-7b](https://huggingface.co/DeepMount00/Mistral-Ita-7b) created using llama.cpp
# Model Description
## Mistral-7B-v0.1 for Italian Language Text Generation
## Model Architecture
- **Base Model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Specialization:** Italian Language
## Evaluation
For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).
Here's a breakdown of the performance metrics:
| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------------|:----------------------|:----------------|:---------------------|:--------|
| **Accuracy Normalized** | 0.6731 | 0.5502 | 0.5364 | 0.5866 |
---
**Quantized 4-Bit Version Available**
A quantized 4-bit version of the model is available for use. This version offers a more efficient processing capability by reducing the precision of the model's computations to 4 bits, which can lead to faster performance and decreased memory usage. This might be particularly useful for deploying the model on devices with limited computational power or memory resources.
For more details and to access the model, visit the following link: [Mistral-Ita-7b-GGUF 4-bit version](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF).
---
## How to Use
How to use this Mistral model for Italian text generation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
MODEL_NAME = "DeepMount00/Mistral-Ita-7b"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).eval()
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
def generate_answer(prompt):
messages = [
{"role": "user", "content": prompt},
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=200, do_sample=True,
temperature=0.001, eos_token_id=tokenizer.eos_token_id)
decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
return decoded[0]
prompt = "Come si apre un file json in python?"
answer = generate_answer(prompt)
print(answer)
```
---
## Developer
[Michele Montebovi]
|
QuantFactory/InstructLM-1.3B-GGUF
|
QuantFactory
| 2024-06-22T17:44:30Z | 23 | 4 | null |
[
"gguf",
"text-generation",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:instruction-pretrain/ft-instruction-synthesizer-collection",
"arxiv:2406.14491",
"arxiv:2309.09530",
"base_model:instruction-pretrain/InstructLM-1.3B",
"base_model:quantized:instruction-pretrain/InstructLM-1.3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T10:28:39Z |
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
base_model: instruction-pretrain/InstructLM-1.3B
pipeline_tag: text-generation
---
# QuantFactory/InstructLM-1.3B-GGUF
This is quantized version of [instruction-pretrain/InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B) created using llama.cpp
# Model Description
## Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **general models pre-trained from scratch** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>
## Resources
**🤗 We share our data and models with example usages; feel free to open any issues or discussions! 🤗**
- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
- [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
- [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
- [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
- [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
## General Pre-Training From Scratch
We augment the [RefinedWeb corpus](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.
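For a quick sanity check before running the full benchmark suite, here is a minimal sketch of loading the base model with standard `transformers` APIs; the prompt and generation settings are illustrative assumptions, not a recipe from this card:

```python
# Minimal sketch: greedy generation with the InstructLM-1.3B base model.
# Illustrative usage only; the prompt format below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "instruction-pretrain/InstructLM-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# The evaluation commands below pass add_bos_token=True; the tokenizer
# prepends the BOS token by default here as well.
inputs = tokenizer("Question: What is instruction pre-training?\nAnswer:",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```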
To evaluate our general base model with the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):
1. Setup dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-1.3B
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our models require add_bos_token to be True
accelerate launch -m lm_eval --model hf \
--model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
--gen_kwargs do_sample=False \
--tasks piqa,hellaswag,winogrande \
--batch_size auto \
--num_fewshot 0
accelerate launch -m lm_eval --model hf \
--model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
--gen_kwargs do_sample=False \
--tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
--batch_size auto \
--num_fewshot 5
```
## Model Citation
If you find our work helpful, please cite us:
[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
|
silent666/Qwen-Qwen1.5-0.5B-1719078191
|
silent666
| 2024-06-22T17:43:12Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-06-22T17:43:11Z |
---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
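In the absence of author-provided instructions, a minimal sketch of loading this adapter on top of its base model, assuming it is a standard PEFT adapter for Qwen/Qwen1.5-0.5B as the card metadata indicates; the generation settings are assumptions, not documented behavior:

```python
# Minimal sketch, assuming this repository contains a standard PEFT adapter
# for the Qwen/Qwen1.5-0.5B base model (per the card metadata).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "silent666/Qwen-Qwen1.5-0.5B-1719078191"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # loads base model + adapter
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")

inputs = tokenizer("Hello, ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)  # assumed settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```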
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|