| modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
grimjim/Llama-Nephilim-Metamorphosis-v1-8B | grimjim | 2024-10-14T15:12:22Z | 10 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:grimjim/llama-3-Nephilim-v3-8B", "base_model:merge:grimjim/llama-3-Nephilim-v3-8B", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:merge:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-07-25T01:43:37Z |
---
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
- grimjim/llama-3-Nephilim-v3-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
pipeline_tag: text-generation
---
# Llama-Nephilim-Metamorphosis-v1-8B
These files comprise a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
A coherent Llama 3 model was merged at low weight into a Llama 3.1 model; no fine-tuning was performed afterward. The resulting model is mostly coherent for roleplay text generation and retains the long-context capability of Llama 3.1, even though none of the merge components was built for, or intended for, roleplay. The model's coherence may be surprising given the difference in tokenizers between Llama 3 and 3.1. It is plausible that 3.1 was trained on a base that originated with 3, so an internal mapping or feature exists that can translate between the two tokenizers.
Testing was performed out to 16K context, using temperature 1 and minP 0.01. This model leans toward being creative, so adjust the temperature up or down as desired.
Some formatting-consistency issues remain with the merged model early in a conversation, but these can be mitigated in an Instruct prompt. Additionally, prompt steering was employed to vary the text-generation output and avoid some of the common failure modes observed with Llama 3/3.1 8B models. The complete Instruct prompt used during testing is available below.
- [context template](https://huggingface.co/debased-ai/SillyTavern-settings/blob/main/advanced_formatting/context_template/Llama%203.1%20Instruct%20Unleashed2.json)
- [instruct prompt](https://huggingface.co/debased-ai/SillyTavern-settings/blob/main/advanced_formatting/instruct_mode/Llama%203.1%20Instruct%20Unleashed2.json)
Built with Llama.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
* [grimjim/llama-3-Nephilim-v3-8B](https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: slerp
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: meta-llama/Meta-Llama-3.1-8B-Instruct
  - layer_range: [0, 32]
    model: grimjim/llama-3-Nephilim-v3-8B
    parameters:
      t:
      - value: 0.1
```
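For reference, here is a minimal sketch (not part of the original card) of loading the merged model with the sampling settings suggested above; it assumes a transformers release recent enough to support `min_p` sampling.
```python
# Hedged sketch: load the merged model and sample with temperature 1.0
# and min_p 0.01, as suggested in the card above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Llama-Nephilim-Metamorphosis-v1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Write a short scene set in a lighthouse.", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.01,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```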
|
nell123/phi-avg | nell123 | 2024-10-14T14:54:53Z | 125 | 0 | transformers | ["transformers", "safetensors", "phi3", "text-generation", "mergekit", "merge", "conversational", "custom_code", "arxiv:2212.04089", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:merge:microsoft/Phi-3-mini-128k-instruct", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:merge:microsoft/Phi-3-mini-4k-instruct", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:merge:microsoft/Phi-3.5-mini-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-14T14:53:42Z |
---
base_model:
- microsoft/Phi-3-mini-128k-instruct
- microsoft/Phi-3-mini-4k-instruct
- microsoft/Phi-3.5-mini-instruct
library_name: transformers
tags:
- mergekit
- merge
---
# output-model-directory
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) as a base.
### Models Merged
The following models were included in the merge:
* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
* [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: "microsoft/Phi-3.5-mini-instruct"
  parameters:
    weight: 1.0
- model: "microsoft/Phi-3-mini-4k-instruct"
  parameters:
    weight: 0.6
- model: "microsoft/Phi-3-mini-128k-instruct"
  parameters:
    weight: 0.3
base_model: "microsoft/Phi-3.5-mini-instruct"
merge_method: task_arithmetic
dtype: float16
```
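To illustrate the method (this sketch is not part of the card), task arithmetic adds weighted task vectors, i.e. the per-tensor difference between each fine-tuned model and the base, onto the base weights:
```python
# Illustrative sketch of task arithmetic over state dicts (names assumed).
import torch

def task_arithmetic(base: dict, models: list, weights: list) -> dict:
    """Return base + sum_i w_i * (model_i - base), computed per tensor."""
    merged = {}
    for name, base_tensor in base.items():
        delta = sum(w * (m[name] - base_tensor) for m, w in zip(models, weights))
        merged[name] = base_tensor + delta
    return merged
```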
|
transitionGap/RATIONCARD-IN-Llama3.1-8B-smallset | transitionGap | 2024-10-14T14:51:58Z | 10 | 0 | transformers | ["transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-10-14T14:26:23Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
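As a minimal sketch (not part of this card), a fine-tune like this can presumably be loaded with Unsloth's `FastLanguageModel` API:
```python
# Hedged sketch, assuming the unsloth package; not an official usage example.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="transitionGap/RATIONCARD-IN-Llama3.1-8B-smallset",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode
```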
|
amiraMamdouh/finetuned_MLM__English_French_German | amiraMamdouh | 2024-10-14T14:41:37Z | 114 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-10-14T12:51:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF | mradermacher | 2024-10-14T14:37:06Z | 10 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.1", "base_model:quantized:bunnycore/Phi-3.5-mini-TitanFusion-0.1", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-10-14T13:14:27Z |
---
base_model: bunnycore/Phi-3.5-mini-TitanFusion-0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
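As an illustrative alternative (an assumption, not from this card), the quants below can also be loaded with the llama-cpp-python bindings:
```python
# Sketch assuming the llama-cpp-python package (with huggingface_hub installed);
# the filename matches the i1-Q4_K_M entry in the table below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF",
    filename="Phi-3.5-mini-TitanFusion-0.1.i1-Q4_K_M.gguf",
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```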
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.3 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.3 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.3 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-TitanFusion-0.1-i1-GGUF/resolve/main/Phi-3.5-mini-TitanFusion-0.1.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
FINGU-AI/Qwen2.5_14B_Instruct_Fine_Tuned_v3 | FINGU-AI | 2024-10-14T14:32:28Z | 35 | 1 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-10-14T04:52:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf | RichardErkhov | 2024-10-14T14:28:44Z | 17 | 0 | null | ["gguf", "endpoints_compatible", "region:us"] | null | 2024-10-14T03:52:00Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tess-2.0-Mixtral-8x7B - GGUF
- Model creator: https://huggingface.co/migtissera/
- Original model: https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tess-2.0-Mixtral-8x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q2_K.gguf) | Q2_K | 16.12GB |
| [Tess-2.0-Mixtral-8x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [Tess-2.0-Mixtral-8x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [Tess-2.0-Mixtral-8x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K.gguf) | Q3_K | 21.0GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [Tess-2.0-Mixtral-8x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [Tess-2.0-Mixtral-8x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_0.gguf) | Q4_0 | 24.63GB |
| [Tess-2.0-Mixtral-8x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [Tess-2.0-Mixtral-8x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [Tess-2.0-Mixtral-8x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_K.gguf) | Q4_K | 26.49GB |
| [Tess-2.0-Mixtral-8x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [Tess-2.0-Mixtral-8x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_1.gguf) | Q4_1 | 27.32GB |
| [Tess-2.0-Mixtral-8x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_0.gguf) | Q5_0 | 30.02GB |
| [Tess-2.0-Mixtral-8x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [Tess-2.0-Mixtral-8x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_K.gguf) | Q5_K | 30.95GB |
| [Tess-2.0-Mixtral-8x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [Tess-2.0-Mixtral-8x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_1.gguf) | Q5_1 | 32.71GB |
| [Tess-2.0-Mixtral-8x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q6_K.gguf) | Q6_K | 35.74GB |
| [Tess-2.0-Mixtral-8x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/tree/main/) | Q8_0 | 46.22GB |
Original model description:
---
license: apache-2.0
---
# Join My General AI Discord (NeuroLattice):
https://discord.gg/Hz6GrwGFKD
# Tess-2.0-Mixtral
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral was trained on the mistralai/Mixtral-8x7B-v0.1 base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
<br>

<br>
### The code example below shows how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Tess-2.0-Mixtral"
output_file_path = "./conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    # Trim at the next "USER:" turn so only the assistant reply is returned.
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = "SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation."
while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    # Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
|
transitionGap/BIRTHCERT-IN-Qwen2.5-0.5B-smallset | transitionGap | 2024-10-14T14:20:29Z | 86 | 0 | transformers | ["transformers", "safetensors", "gguf", "qwen2", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Qwen2.5-0.5B-bnb-4bit", "base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-10-14T14:17:38Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
saishshinde15/Trail | saishshinde15 | 2024-10-14T14:20:17Z | 8 | 0 | transformers | ["transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-10-14T13:18:30Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** saishshinde15
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
transitionGap/SUBSIDY-IN-Qwen2.5-0.5B-smallset | transitionGap | 2024-10-14T14:13:40Z | 79 | 0 | transformers | ["transformers", "safetensors", "gguf", "qwen2", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Qwen2.5-0.5B-bnb-4bit", "base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-10-14T14:10:37Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mav23/Gemma-2-Ataraxy-v2-9B-GGUF | mav23 | 2024-10-14T14:09:17Z | 68 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "base_model:ifable/gemma-2-Ifable-9B", "base_model:merge:ifable/gemma-2-Ifable-9B", "base_model:jsgreenawalt/gemma-2-9B-it-advanced-v2.1", "base_model:merge:jsgreenawalt/gemma-2-9B-it-advanced-v2.1", "model-index", "endpoints_compatible", "region:us", "conversational"] | null | 2024-10-14T13:06:56Z |
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- ifable/gemma-2-Ifable-9B
- jsgreenawalt/gemma-2-9B-it-advanced-v2.1
model-index:
- name: Gemma-2-Ataraxy-v2-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 21.36
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 39.8
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.83
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.3
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.88
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 35.79
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
---
# Gemma 2 Ataraxy v2 9B
Finally, after much testing, a successor to the first Gemma 2 Ataraxy 9B. It uses the same kind of recipe, the same principles, and the same concept as the last Ataraxy. It's not strictly a better overall model: v1 is more well rounded, while v2 is a little better at writing but has a little more slop and some other issues. Consider this a sidegrade.

## GGUF / EXL2 Quants
Bartowski quants (imatrix): https://huggingface.co/bartowski/Gemma-2-Ataraxy-v2-9B-GGUF
Mradermacher quants (static): https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v2-9B-GGUF
Mradermacher quants (imatrix): https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v2-9B-i1-GGUF
Bartowski and mradermacher use different calibration data for their imatrix quants, I believe, and the static quants of course use none. Pick your poison.
More coming soon.
## Format
Use Gemma 2 format.
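For reference (the card does not spell it out), the Gemma 2 prompt format uses `<start_of_turn>`/`<end_of_turn>` markers; a minimal example:
```python
# The standard Gemma 2 chat turn format, shown as a plain string for clarity.
prompt = (
    "<start_of_turn>user\n"
    "Write a haiku about autumn.<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```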
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The following models were included in the merge:
* [ifable/gemma-2-Ifable-9B](https://huggingface.co/ifable/gemma-2-Ifable-9B)
* [jsgreenawalt/gemma-2-9B-it-advanced-v2.1](https://huggingface.co/jsgreenawalt/gemma-2-9B-it-advanced-v2.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: ifable/gemma-2-Ifable-9B
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 42]
    model: jsgreenawalt/gemma-2-9B-it-advanced-v2.1
  - layer_range: [0, 42]
    model: ifable/gemma-2-Ifable-9B
```
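For intuition, here is an illustrative sketch of SLERP (this is not mergekit's actual implementation): each pair of weight tensors is interpolated along the arc between them, with the per-filter `t` schedules above controlling the blend.
```python
# Illustrative SLERP between two weight tensors, treated as flat vectors.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(a_unit @ b_unit, -1.0, 1.0))  # angle between models
    if omega < eps:  # nearly parallel: linear interpolation is a safe fallback
        return (1.0 - t) * a + t * b
    coef_a = np.sin((1.0 - t) * omega) / np.sin(omega)
    coef_b = np.sin(t * omega) / np.sin(omega)
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape)
```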
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v2-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.16|
|IFEval (0-Shot) |21.36|
|BBH (3-Shot) |39.80|
|MATH Lvl 5 (4-Shot)| 0.83|
|GPQA (0-shot) |12.30|
|MuSR (0-shot) | 4.88|
|MMLU-PRO (5-shot) |35.79|
Second-highest-ranked open-weight model on EQ-Bench.
|
mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF | mradermacher | 2024-10-14T14:09:07Z | 44 | 0 | transformers | ["transformers", "gguf", "en", "base_model:ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1", "base_model:quantized:ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-10-14T11:03:52Z |
---
base_model: ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 4.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 5.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 7.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 8.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q4_0.gguf) | i1-Q4_0 | 11.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q6_K.gguf) | i1-Q6_K | 16.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
arjunanand13/PALD_Florence-35e | arjunanand13 | 2024-10-14T14:01:25Z | 104 | 0 | transformers | ["transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us"] | text-generation | 2024-10-14T14:00:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF | mradermacher | 2024-10-14T14:01:07Z | 11 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/Llama-3-8B-TitanFusion-Mix-2", "base_model:quantized:bunnycore/Llama-3-8B-TitanFusion-Mix-2", "endpoints_compatible", "region:us"] | null | 2024-10-14T07:50:59Z |
---
base_model: bunnycore/Llama-3-8B-TitanFusion-Mix-2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3-8B-TitanFusion-Mix-2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
madnesslab/Qwen2-VL-2B-Instruct-OpenVINO-INT4-SYM | madnesslab | 2024-10-14T13:59:56Z | 5 | 0 | null | ["openvino", "qwen2_vl", "license:mit", "region:us"] | null | 2024-10-14T13:49:04Z |
---
license: mit
---
Original model: https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct
Quantization documentation: https://docs.openvino.ai/nightly/notebooks/qwen2-vl-with-output.html
Quantization config:
```python
import nncf

compression_configuration = {
    "mode": nncf.CompressWeightsMode.INT4_SYM,
    "group_size": 128,
    "ratio": 1.0,
}
```
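A hedged sketch of how such a configuration is typically applied (`ov_model` here is a placeholder for an OpenVINO model loaded elsewhere):
```python
# Assumed usage of NNCF weight compression with the config above.
import nncf

compressed_model = nncf.compress_weights(
    ov_model,  # placeholder: an openvino.Model obtained elsewhere
    mode=compression_configuration["mode"],
    group_size=compression_configuration["group_size"],
    ratio=compression_configuration["ratio"],
)
```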
|
BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF | BoltMonkey | 2024-10-14T13:56:25Z | 27 | 1 | null | ["gguf", "merge", "mergekit", "lazymergekit", "mlabonne/NeuralDaredevil-8B-abliterated", "grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B", "llama-cpp", "gguf-my-repo", "base_model:BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated", "base_model:quantized:BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated", "license:llama3.1", "endpoints_compatible", "region:us", "conversational"] | null | 2024-10-01T14:48:46Z |
---
license: llama3.1
base_model: BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralDaredevil-8B-abliterated
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
- llama-cpp
- gguf-my-repo
---
# NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF
This model was converted to GGUF format from [`BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated`](https://huggingface.co/BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated) using the GGUF-my-repo space.
Refer to the [original model card](https://huggingface.co/BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated) for more details on the model.
The following GGUFs are available:
* [Q4_K_M](https://huggingface.co/BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF/resolve/main/neuraldaredevil-supernova7b-dareties-abliterated-q4_k_m.gguf?download=true) ...... **4.92GB**
* [Q5_K_M](https://huggingface.co/BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF/resolve/main/neuraldaredevil-supernova7b-dareties-abliterated-q5_k_m.gguf?download=true) ...... **5.73GB** _recommended_
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF --hf-file neuraldaredevil-supernova7b-dareties-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF --hf-file neuraldaredevil-supernova7b-dareties-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF --hf-file neuraldaredevil-supernova7b-dareties-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF --hf-file neuraldaredevil-supernova7b-dareties-abliterated-q5_k_m.gguf -c 2048
```
|
soulgrove/lis_LORA
|
soulgrove
| 2024-10-14T13:53:10Z | 7 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T12:55:00Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LoLiz
---
# Lis_Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LoLiz` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('soulgrove/lis_LORA', weight_name='lora.safetensors')
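# include the trigger word 'LoLiz' in your prompt to activate this LoRA (see Trigger words above)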
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
transitionGap/FARMSUBSIDY-IN-Qwen2.5-0.5B-smallset
|
transitionGap
| 2024-10-14T13:52:34Z | 86 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Qwen2.5-0.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T13:49:26Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
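Below is a minimal inference sketch (not from the original card), assuming the safetensors weights load with the standard `transformers` Auto classes; the prompt is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "transitionGap/FARMSUBSIDY-IN-Qwen2.5-0.5B-smallset"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt for a farm-subsidy-tuned model.
inputs = tokenizer("Explain the eligibility criteria for farm subsidies in India:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```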
|
Lawrence/parler-tts-mini-v1-Lawrence-Ndizi_untransformed_r4
|
Lawrence
| 2024-10-14T13:50:07Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-14T13:49:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chendelong/DirectSAM-gen2-1024px-1014
|
chendelong
| 2024-10-14T13:42:23Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T13:41:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YALCINKAYA/opsgenius_s
|
YALCINKAYA
| 2024-10-14T13:34:30Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"trl",
"sft",
"reformyx",
"opsgenius",
"transformer",
"en",
"base_model:EleutherAI/gpt-neo-2.7B",
"base_model:quantized:EleutherAI/gpt-neo-2.7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-12T20:08:29Z |
---
license: apache-2.0
language: en
tags:
- trl
- sft
- reformyx
- opsgenius
- text-generation
- transformer
model_type: gpt_neo
base_model: EleutherAI/gpt-neo-2.7B # Updated to specify GPT-Neo 2.7B
pipeline_tag: text-generation
inference: true
interface: true
library_name: transformers
---
|
TinyLamda/llama-3.2-1b-4bit-128g
|
TinyLamda
| 2024-10-14T13:34:12Z | 17 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"license:llama3.2",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-10-14T11:43:38Z |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
Llama 3.2 Version Release Date: September 25, 2024
“Agreement” means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
distributed by Meta at https://llama.meta.com/doc/overview.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are
entering into this Agreement on such person or entity’s behalf), of the age required under
applicable laws, rules or regulations to provide legal consent and that has legal authority
to bind your employer or such other person or entity if you are entering in this Agreement
on their behalf.
“Llama 3.2” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://www.llama.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or Switzerland)
and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof),
or a product or service (including another AI model) that contains any of them, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
on a related website, user interface, blogpost, about page, or product documentation. If you use the
Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the
following attribution notice within a “Notice” text file distributed as a part of such copies:
“Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
incorporated by reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates,
is greater than 700 million monthly active users in the preceding calendar month, you must request
a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
California without regard to choice of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
any dispute arising out of this Agreement.
### Llama 3.2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
The most recent copy of this policy can be found at
[https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
#### Prohibited Uses
We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)
10. Illegal drugs and regulated/controlled substances
11. Operation of critical infrastructure, transportation technologies, or heavy machinery
12. Self-harm or harm to others, including suicide, cutting, and eating disorders
13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with `transformers` and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
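For the Auto-classes route mentioned above, a minimal equivalent sketch (mirroring the pipeline example rather than an official recipe) looks like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```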
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantisation, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below using `huggingface-cli`:
```bash
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | - | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | - | 700 | 133 | 0 |
| Total | 830k | 86k | | 240 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
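The token-level logit distillation described above can be sketched as a simple KL objective between teacher and student logits (illustrative only, not Meta's actual training code):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # Both tensors have shape (batch, seq_len, vocab_size).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student), averaged over the batch; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```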
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmarks - English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 |
| Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 |
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 |
| | | Spanish | 41.5 | 55.1 | 62.5 |
| | | Italian | 39.8 | 53.8 | 61.6 |
| | | German | 39.2 | 53.3 | 60.6 |
| | | French | 40.5 | 54.6 | 62.3 |
| | | Hindi | 33.5 | 43.3 | 50.9 |
| | | Thai | 34.7 | 44.5 | 50.3 |
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf
|
RichardErkhov
| 2024-10-14T13:29:37Z | 43 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T05:45:41Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
HarmonicHarlequin_v5-20B - GGUF
- Model creator: https://huggingface.co/Elfrino/
- Original model: https://huggingface.co/Elfrino/HarmonicHarlequin_v5-20B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [HarmonicHarlequin_v5-20B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q2_K.gguf) | Q2_K | 11.47GB |
| [HarmonicHarlequin_v5-20B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.IQ3_XS.gguf) | IQ3_XS | 12.68GB |
| [HarmonicHarlequin_v5-20B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.IQ3_S.gguf) | IQ3_S | 13.4GB |
| [HarmonicHarlequin_v5-20B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q3_K_S.gguf) | Q3_K_S | 13.4GB |
| [HarmonicHarlequin_v5-20B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.IQ3_M.gguf) | IQ3_M | 14.18GB |
| [HarmonicHarlequin_v5-20B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q3_K.gguf) | Q3_K | 15.04GB |
| [HarmonicHarlequin_v5-20B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q3_K_M.gguf) | Q3_K_M | 15.04GB |
| [HarmonicHarlequin_v5-20B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q3_K_L.gguf) | Q3_K_L | 16.47GB |
| [HarmonicHarlequin_v5-20B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.IQ4_XS.gguf) | IQ4_XS | 16.67GB |
| [HarmonicHarlequin_v5-20B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q4_0.gguf) | Q4_0 | 17.5GB |
| [HarmonicHarlequin_v5-20B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.IQ4_NL.gguf) | IQ4_NL | 17.6GB |
| [HarmonicHarlequin_v5-20B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q4_K_S.gguf) | Q4_K_S | 17.62GB |
| [HarmonicHarlequin_v5-20B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q4_K.gguf) | Q4_K | 18.71GB |
| [HarmonicHarlequin_v5-20B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q4_K_M.gguf) | Q4_K_M | 18.71GB |
| [HarmonicHarlequin_v5-20B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q4_1.gguf) | Q4_1 | 19.43GB |
| [HarmonicHarlequin_v5-20B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q5_0.gguf) | Q5_0 | 21.36GB |
| [HarmonicHarlequin_v5-20B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q5_K_S.gguf) | Q5_K_S | 21.36GB |
| [HarmonicHarlequin_v5-20B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q5_K.gguf) | Q5_K | 21.98GB |
| [HarmonicHarlequin_v5-20B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q5_K_M.gguf) | Q5_K_M | 21.98GB |
| [HarmonicHarlequin_v5-20B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q5_1.gguf) | Q5_1 | 23.29GB |
| [HarmonicHarlequin_v5-20B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q6_K.gguf) | Q6_K | 25.46GB |
| [HarmonicHarlequin_v5-20B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf/blob/main/HarmonicHarlequin_v5-20B.Q8_0.gguf) | Q8_0 | 32.97GB |
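A typical way to try one of these files locally (illustrative; any recent llama.cpp build works) is to download a quant and run it with `llama-cli`:
```bash
# Download one quant (Q4_K_M as an example) and run it with llama.cpp.
huggingface-cli download RichardErkhov/Elfrino_-_HarmonicHarlequin_v5-20B-gguf \
  HarmonicHarlequin_v5-20B.Q4_K_M.gguf --local-dir .
./llama-cli -m HarmonicHarlequin_v5-20B.Q4_K_M.gguf -p "Once upon a time" -n 128
```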
Original model description:
---
base_model:
- Undi95/PsyMedRP-v1-20B
library_name: transformers
tags:
- mergekit
- merge
---

***NOTE: Oops, it's actually a 33B.***
***The 5th iteration of Harmonic Harlequin, and it seems the most coherent and lucid of the lot (early days though). Still need to test for creativity; might be a solid model, fingers crossed...***
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# The amount to attenuate the Q and K matrices of the *FIRST COPY* of each layer.
# NOTE: This scales the score matrix values by QK_ATTENUATION_FACTOR^2 (eg: sqrt(1/2)^2 = 1/2).
const_tag: &QK_ATTENUATION_FACTOR 0.84 # <- This was changed; v4 used sqrt(1/2) ≈ 0.7071067812
# The amount to scale the contribution to the residual stream (to hopefully reduce overshoot).
const_tag: &RESIDUAL_SCALE_FACTOR 0.71 # ≈ sqrt(1/2) <- This was changed; v4 was 0.7071067812
# Make the first copy *ONLY* take a more "bird's eye view" (ie: pay attention to more of the context).
model1-filter-env: &MODEL1_FILTER_ENV
parameters:
scale:
- filter: q_proj
value: *QK_ATTENUATION_FACTOR
- filter: k_proj
value: *QK_ATTENUATION_FACTOR
- filter: down_proj
value: *RESIDUAL_SCALE_FACTOR
- value: 1.0
# Make the second copy pay attention to the context as before.
model2-filter-env: &MODEL2_FILTER_ENV
parameters:
scale:
- filter: down_proj
value: *RESIDUAL_SCALE_FACTOR
- value: 1.0
slices:
# The first 10 layers are not duplicated.
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [0, 10]
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [10, 11]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [10, 11]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [11, 12]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [11, 12]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [12, 13]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [12, 13]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [13, 14]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [13, 14]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [14, 15]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [14, 15]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [15, 16]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [15, 16]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [16, 17]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [16, 17]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [17, 18]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [17, 18]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [18, 19]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [18, 19]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [19, 20]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [19, 20]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [20, 21]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [20, 21]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [21, 22]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [21, 22]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [22, 23]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [22, 23]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [23, 24]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [23, 24]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [24, 25]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [24, 25]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [25, 26]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [25, 26]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [26, 27]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [26, 27]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [27, 28]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [27, 28]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [28, 29]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [28, 29]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [29, 30]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [29, 30]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [30, 31]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [30, 31]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [31, 32]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [31, 32]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [32, 33]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [32, 33]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [33, 34]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [33, 34]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [34, 35]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [34, 35]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [35, 36]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [35, 36]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [36, 37]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [36, 37]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [37, 38]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [37, 38]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [38, 39]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [38, 39]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [39, 40]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [39, 40]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [40, 41]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [40, 41]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [41, 42]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [41, 42]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [42, 43]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [42, 43]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [43, 44]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [43, 44]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [44, 45]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [44, 45]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [45, 46]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [45, 46]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [46, 47]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [46, 47]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [47, 48]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [47, 48]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [48, 49]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [48, 49]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [49, 50]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [49, 50]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [50, 51]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [50, 51]
<<: *MODEL2_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [51, 52]
<<: *MODEL1_FILTER_ENV
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [51, 52]
<<: *MODEL2_FILTER_ENV
# The last 10 layers are not duplicated.
- sources:
- model: Undi95/PsyMedRP-v1-20B
layer_range: [52, 62]
merge_method: passthrough
dtype: float16
```
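For intuition on the two constants in the config above: scaling both `q_proj` and `k_proj` by a factor f scales the pre-softmax score matrix QKᵀ by f², while the `down_proj` scale attenuates each duplicated layer's contribution to the residual stream. A minimal NumPy sketch of the score-matrix effect (illustrative only; the shapes are arbitrary toy values):

```python
import numpy as np

# Toy attention scores for a single head: S = Q @ K.T
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # (seq_len, head_dim)
K = rng.standard_normal((4, 8))

f = 0.84  # QK_ATTENUATION_FACTOR from the config above
S = Q @ K.T
S_scaled = (f * Q) @ (f * K).T

# Scaling Q and K each by f scales the score matrix by f^2 (~0.71 here),
# flattening the softmax so attention spreads over more of the context.
assert np.allclose(S_scaled, f**2 * S)
print(f"score scale factor: {f**2:.4f}")
```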
|
oma7777/finetunellama38b
|
oma7777
| 2024-10-14T13:24:16Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T13:19:48Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** oma7777
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
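A minimal inference sketch using Unsloth's loader (hedged: the 2048-token `max_seq_length`, the 4-bit loading, and the example prompt are assumptions, not documented by the author):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint; max_seq_length is an assumed value.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="oma7777/finetunellama38b",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```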
|
transitionGap/DRIVINGLICENE-IN-Qwen2.5-0.5B-smallset
|
transitionGap
| 2024-10-14T13:23:44Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Qwen2.5-0.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T13:19:57Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JuniperChinenye/xxss
|
JuniperChinenye
| 2024-10-14T13:23:38Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T13:21:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
transitionGap/ADHAR-IN-Qwen2.5-0.5B-smallset
|
transitionGap
| 2024-10-14T13:15:54Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Qwen2.5-0.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T13:12:28Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gurevichan/Carlos-Jones-3349770_001190000
|
gurevichan
| 2024-10-14T13:12:47Z | 123 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T13:11:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tejas-vaia/Renew_Cars_Resume
|
tejas-vaia
| 2024-10-14T13:11:59Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-14T13:07:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF
|
mradermacher
| 2024-10-14T13:11:08Z | 91 | 2 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:rombodawg/Rombos-LLM-V2.6-Qwen-14b",
"base_model:quantized:rombodawg/Rombos-LLM-V2.6-Qwen-14b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-14T08:27:13Z |
---
base_model: rombodawg/Rombos-LLM-V2.6-Qwen-14b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
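As a concrete starting point, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (the chosen quant file and the context size are assumptions; pick any file from the table below):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo; Q4_K_M is an assumed, commonly used choice.
path = hf_hub_download(
    repo_id="mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF",
    filename="Rombos-LLM-V2.6-Qwen-14b.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # n_ctx is an assumed value
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```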
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Rombos-LLM-V2.6-Qwen-14b-i1-GGUF/resolve/main/Rombos-LLM-V2.6-Qwen-14b.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Himanshu99001/classify-bluesky
|
Himanshu99001
| 2024-10-14T13:10:16Z | 119 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T13:10:12Z |
---
library_name: transformers
license: apache-2.0
base_model: albert/albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: classify-bluesky
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classify-bluesky
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0173
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
- Accuracy Label Bluesky: 1.0
- Accuracy Label Specific: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
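A minimal inference sketch (hedged: the label names suggested by the evaluation metrics above are `Bluesky` and `Specific`, but the exact `id2label` mapping is an assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Himanshu99001/classify-bluesky")
model = AutoModelForSequenceClassification.from_pretrained("Himanshu99001/classify-bluesky")

inputs = tokenizer("Example post text to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

# Map class indices to names via the model config (assumed to be populated).
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], float(p))
```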
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
shibing624/chinese-text-correction-1.5b
|
shibing624
| 2024-10-14T13:08:06Z | 1,080 | 6 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"conversational",
"zh",
"dataset:shibing624/chinese_text_correction",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-12T10:43:08Z |
---
library_name: transformers
base_model: Qwen/Qwen2.5-1.5B-Instruct
license: apache-2.0
datasets:
- shibing624/chinese_text_correction
language:
- zh
metrics:
- f1
tags:
- text-generation-inference
widget:
- text: "文本纠错:\n少先队员因该为老人让坐。"
---
# Chinese Text Correction Model
chinese-text-correction-1.5b is a Chinese text correction model for spelling and grammar correction.
Overall performance of `shibing624/chinese-text-correction-1.5b` on the CSC **test** set:
|input_text|predict_text|
|:--- |:--- |
|文本纠错:\n少先队员因该为老人让坐。|少先队员应该为老人让座。|
# Models
| Name | Base Model | Download |
|-----------------|-------------------|-----------------------------------------------------------------------|
| chinese-text-correction-1.5b | Qwen/Qwen2.5-1.5B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-1.5b) |
| chinese-text-correction-1.5b-lora | Qwen/Qwen2.5-1.5B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-1.5b-lora) |
| chinese-text-correction-7b | Qwen/Qwen2.5-7B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-7b) |
| chinese-text-correction-7b-lora | Qwen/Qwen2.5-7B-Instruct | [🤗 Hugging Face](https://huggingface.co/shibing624/chinese-text-correction-7b-lora) |
### Evaluation results
- Evaluation metric: F1
- CSC (Chinese Spelling Correction): spelling-correction models; they handle length-aligned errors such as phonetically similar, visually similar, and grammatical mistakes
- CTC (Chinese Text Correction): text-correction models; they handle length-aligned spelling and grammar errors as well as length-unaligned errors such as extra or missing characters
- GPU: Tesla V100, 32 GB VRAM
| Model Name | Model Link | Base Model | Avg | SIGHAN-2015 | EC-LAW | MCSC | GPU/CPU | QPS |
|:-----------------|:------------------------------------------------------------------------------------------------------------------------|:---------------------------|:-----------|:------------|:-------|:-------|:--------|:--------|
| Kenlm-CSC | [shibing624/chinese-kenlm-klm](https://huggingface.co/shibing624/chinese-kenlm-klm) | kenlm | 0.3409 | 0.3147 | 0.3763 | 0.3317 | CPU | 9 |
| Mengzi-T5-CSC | [shibing624/mengzi-t5-base-chinese-correction](https://huggingface.co/shibing624/mengzi-t5-base-chinese-correction) | mengzi-t5-base | 0.3984 | 0.7758 | 0.3156 | 0.1039 | GPU | 214 |
| ERNIE-CSC | [PaddleNLP/ernie-csc](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/legacy/examples/text_correction/ernie-csc) | PaddlePaddle/ernie-1.0-base-zh | 0.4353 | 0.8383 | 0.3357 | 0.1318 | GPU | 114 |
| MacBERT-CSC | [shibing624/macbert4csc-base-chinese](https://huggingface.co/shibing624/macbert4csc-base-chinese) | hfl/chinese-macbert-base | 0.3993 | 0.8314 | 0.1610 | 0.2055 | GPU | **224** |
| ChatGLM3-6B-CSC | [shibing624/chatglm3-6b-csc-chinese-lora](https://huggingface.co/shibing624/chatglm3-6b-csc-chinese-lora) | THUDM/chatglm3-6b | 0.4538 | 0.6572 | 0.4369 | 0.2672 | GPU | 3 |
| Qwen2.5-1.5B-CTC | [shibing624/chinese-text-correction-1.5b](https://huggingface.co/shibing624/chinese-text-correction-1.5b) | Qwen/Qwen2.5-1.5B-Instruct | 0.6802 | 0.3032 | 0.7846 | 0.9529 | GPU | 6 |
| Qwen2.5-7B-CTC | [shibing624/chinese-text-correction-7b](https://huggingface.co/shibing624/chinese-text-correction-7b) | Qwen/Qwen2.5-7B-Instruct | **0.8225** | 0.4917 | 0.9798 | 0.9959 | GPU | 3 |
## Usage (pycorrector)
This model is released as part of the [pycorrector](https://github.com/shibing624/pycorrector) project, which supports fine-tuning large models for text correction. Invoke it as follows:
Install package:
```shell
pip install -U pycorrector
```
```python
from pycorrector.gpt.gpt_corrector import GptCorrector
if __name__ == '__main__':
error_sentences = [
'真麻烦你了。希望你们好好的跳无',
'少先队员因该为老人让坐',
'机七学习是人工智能领遇最能体现智能的一个分知',
'一只小鱼船浮在平净的河面上',
'我的家乡是有明的渔米之乡',
]
m = GptCorrector("shibing624/chinese-text-correction-1.5b")
batch_res = m.correct_batch(error_sentences)
for i in batch_res:
print(i)
print()
```
## Usage (HuggingFace Transformers)
Without [pycorrector](https://github.com/shibing624/pycorrector), you can use the model like this:
First, you pass your input through the transformer model, then you get the generated sentence.
Install package:
```
pip install transformers
```
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "shibing624/chinese-text-correction-1.5b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
input_content = "文本纠错:\n少先队员因该为老人让坐。"
messages = [{"role": "user", "content": input_content}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=1024, temperature=0, do_sample=False, repetition_penalty=1.08)
print(tokenizer.decode(outputs[0]))
```
output:
```shell
少先队员应该为老人让座。
```
Model file layout:
```
shibing624/chinese-text-correction-1.5b
|-- added_tokens.json
|-- config.json
|-- generation_config.json
|-- merges.txt
|-- model.safetensors
|-- model.safetensors.index.json
|-- README.md
|-- special_tokens_map.json
|-- tokenizer_config.json
|-- tokenizer.json
`-- vocab.json
```
#### Training parameters:
- num_epochs: 8
- batch_size: 4
- steps: 36000
- eval_loss: 0.14
- base model: Qwen/Qwen2.5-1.5B-Instruct
- train data: [shibing624/chinese_text_correction](https://huggingface.co/datasets/shibing624/chinese_text_correction)
- train time: 9 days 8 hours
- eval_loss: 
- train_loss: 
### Training dataset
#### Chinese text correction dataset
- Data: [shibing624/chinese_text_correction](https://huggingface.co/datasets/shibing624/chinese_text_correction)
To train your own Qwen-based correction model, see [https://github.com/shibing624/pycorrector](https://github.com/shibing624/pycorrector) or [https://github.com/shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
## Citation
```latex
@software{pycorrector,
author = {Xu Ming},
title = {pycorrector: Implementation of language model finetune},
year = {2024},
url = {https://github.com/shibing624/pycorrector},
}
```
|
transitionGap/RERA-IN-Qwen2.5-0.5B-smallset
|
transitionGap
| 2024-10-14T13:07:05Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Qwen2.5-0.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T13:03:56Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
asif00/bangla-llama-1B-gguf-16bit
|
asif00
| 2024-10-14T13:07:01Z | 63 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"question-answering",
"bn",
"dataset:OdiaGenAI/all_combined_bengali_252k",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-09-26T23:42:35Z |
---
language:
- bn
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- gguf
datasets:
- OdiaGenAI/all_combined_bengali_252k
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: question-answering
---
# Bangla LLaMA GGUF 1B-16bit
**Bangla LLaMA GGUF** is a 1-billion-parameter language model optimized for Bengali-language tasks such as context-based question answering and retrieval-augmented generation. It is derived from **LLaMA 3.2 1B** and trained on the [OdiaGenAI/all_combined_bengali_252k](https://huggingface.co/datasets/OdiaGenAI/all_combined_bengali_252k) dataset.
## Features
- **Model Size:** 1B parameters
- **Format:** GGUF (16-bit)
- **Language:** Bengali
- **Use Cases:**
- Context-based Question Answering
- Bengali Retrieval-Augmented Generation
- **Integration:** Compatible with `llama.cpp` and Hugging Face `transformers` (with conversion)
## Usage
### 1. Using with `llama.cpp`
#### Setup
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
```
#### Run Inference
```bash
./main -m path/to/asif00/bangla-llama-1B-gguf-16bit.gguf -p "আপনার প্রশ্ন এখানে"
```
### 2. Using with Hugging Face Transformers
**Note:** GGUF format is not directly supported by `transformers`. Conversion to a compatible format is required.
#### Prerequisites
```bash
pip install transformers accelerate
```
#### Example Script
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load tokenizer and model (after conversion)
tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-1B")
model = AutoModelForCausalLM.from_pretrained("path/to/converted-model").to("cuda")  # move to GPU to match the inputs below
prompt = """
নিচের নির্দেশনা বাংলা ভাষায় যা একটি কাজ বর্ণনা করে, এবং ইনপুটও বাংলা ভাষায় যা অতিরিক্ত প্রসঙ্গ প্রদান করে। উপযুক্তভাবে অনুরোধ পূরণ করে বাংলা ভাষায় একটি প্রতিক্রিয়া লিখুন।
### নির্দেশনা:
{}
### ইনপুট:
{}
### প্রতিক্রিয়া:
"""
def generate_response(instruction, context):
input_text = prompt.format(instruction, context)
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(output[0], skip_special_tokens=True)
response = response.split("### প্রতিক্রিয়া:")[-1].strip()
return response
# Example
instruction = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর সম্পর্কে একটি সংক্ষিপ্ত বিবরণ দিন।"
context = "মহাশ্বেতা দেবী ২০১৬ সালে হৃদরোগে আক্রান্ত হয়ে কলকাতায় মৃত্যুবরণ করেন।"
print(generate_response(instruction, context))
```
## Example
```python
question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
context = (
"২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। "
"সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
)
answer = generate_response(question, context)
print("উত্তর:", answer)
```
**Output:**
```
উত্তর: মহাশ্বেতা দেবী ২৮ জুলাই ২০১৬ সালে মৃত্যুবরণ করেন।
```
## Limitations
- **Dataset Size:** Trained on a limited dataset, which may affect response accuracy.
- **Factuality:** May generate incorrect or nonsensical answers.
- **Language Support:** Primarily optimized for Bengali; performance may vary for other languages.
## Disclaimer
The **Bangla LLaMA GGUF** model's performance is contingent on the quality and diversity of the training data. Users should verify the information generated, especially for critical applications.
## Additional Resources
- **llama.cpp Repository:** [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
- **Hugging Face Dataset:** [OdiaGenAI/all_combined_bengali_252k](https://huggingface.co/datasets/OdiaGenAI/all_combined_bengali_252k)
- **Model Page:** [asif00/bangla-llama-1B-gguf-16bit](https://huggingface.co/asif00/bangla-llama-1B-gguf-16bit)
|
Edoigtrd/T5-nl2bash
|
Edoigtrd
| 2024-10-14T13:06:12Z | 132 | 0 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"code",
"en",
"dataset:Edoigtrd/tldr-pages",
"license:cc-by-nc-sa-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-04T17:06:18Z |
---
license: cc-by-nc-sa-2.0
datasets:
- Edoigtrd/tldr-pages
language:
- en
library_name: transformers
tags:
- code
---
# T5-nl2bash
T5-nl2bash is a model that generates bash commands from natural language descriptions.
This repository contains the trained model, fine-tuned from [Flan-T5 base](https://huggingface.co/google/flan-t5-base).
# Training data
- Tldr.sh pages - [tldr-dataset](https://huggingface.co/datasets/Edoigtrd/tldr-pages)
- nl2bash - [nl2bash](https://github.com/TellinaTool/nl2bash)
# Model
The model is fine-tuned from the [Flan-T5 base](https://huggingface.co/google/flan-t5-base) model.
It was trained on a single NVIDIA L4 from Scaleway for 4 hours (4 epochs).
The model aims to generate bash commands from natural language descriptions.
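A minimal usage sketch with 🤗 Transformers (hedged: the card does not document an input prefix, so the plain description is passed as-is):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Edoigtrd/T5-nl2bash")
model = AutoModelForSeq2SeqLM.from_pretrained("Edoigtrd/T5-nl2bash")

# A natural-language task description, taken from the examples below.
description = "Find all *.ogg files on the system ignoring the case"
inputs = tokenizer(description, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```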
# Results
> The results are evaluated on a 500-example evaluation subset of the data cited above.
| Metric | Flan-T5 - Nl2bash |
|--------------------|------------|
| exact | 0.064 |
| rouge | 0.530054 |
| bleu | 0.177511 |
| Levenshtein average | 0.612479 |
| Levenshtein median | 0.608696 |
# Examples:
| Task | Command |
|------|---------|
| Count time, calls, and errors for each system call and report a summary on program exit | `slurmctl -c path/to/file` |
| Search for a word in all the normal/regular files in the current folder | `find . -type f -exec grep -H` |
| Start the daemon with verbose output coming from specific files in the bluetoothd or plugins source | `bluetoothd-start --verbose path/to/file` |
| Find all directories under current directory and make them read, write, and executable for owner & group and remove read-write-execute permission for others | `find . -type d -exec chmod ug=` |
| Set the verbosity level (error, warn, info, debug, or trace) | `svn -v verbosity` |
| Show the configuration of a virtual machine, specifying its ID | `qm show vm_id` |
| Import VPN configuration | `ipconfig import config` |
| Find all the files ending with ".coffee" in the current folder and search for the words "re" in each line | `find . -name "*.coffee" -exec grep` |
| Run a program from a file | `pstool --run path/to/file` |
| Display details and metadata about a specified portable service image | `portablectl show-image path/to/image` |
| Display all the files in the current folder excluding those which are in the path of ".git" | `find . -path '*.git' -prune -` |
| Find all *.ogg files on the system ignoring the case | `find / -iname "*.ogg"` |
| Print and recursively remove the alphabetically last directory in the current directory | `rmdir -rf` |
# License:
[Creative Commons cc-by-nc-sa 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
|
FIMH/sd-class-butterflies-32
|
FIMH
| 2024-10-14T13:05:56Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-10-14T13:00:09Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('FIMH/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
arjunanand13/PALD_Florence-20e
|
arjunanand13
| 2024-10-14T13:01:53Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-14T13:01:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
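In the absence of author-provided code, here is a minimal sketch assuming this checkpoint follows the upstream Florence-2 API (the `<CAPTION>` task prompt, the `trust_remote_code=True` requirement, and the local image path are assumptions carried over from the base model):

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained(
    "arjunanand13/PALD_Florence-20e", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(
    "arjunanand13/PALD_Florence-20e", trust_remote_code=True
)

image = Image.open("example.jpg")  # assumed local image
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=128,
)
print(processor.batch_decode(ids, skip_special_tokens=False)[0])
```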
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CheeLi03/whisper-tiny-pt-5000
|
CheeLi03
| 2024-10-14T13:01:19Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:fleurs",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-10-14T10:19:55Z |
---
base_model: openai/whisper-tiny
datasets:
- fleurs
language:
- pt
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Tiny Portuguese 5000 - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: pt_br
split: None
args: 'config: pt split: test'
metrics:
- type: wer
value: 102.8207418551079
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Portuguese 5000 - Chee Li
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6510
- Wer: 102.8207
## Model description
More information needed
## Intended uses & limitations
More information needed
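A minimal transcription sketch with the 🤗 `pipeline` API (the audio file path is a placeholder for a Portuguese clip):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-tiny-pt-5000",
)
print(asr("sample_pt.wav")["text"])  # assumed local audio file
```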
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 625
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.1445 | 5.0251 | 1000 | 0.5040 | 109.3037 |
| 0.0131 | 10.0503 | 2000 | 0.5788 | 110.2628 |
| 0.0043 | 15.0754 | 3000 | 0.6183 | 112.4207 |
| 0.0027 | 20.1005 | 4000 | 0.6429 | 109.2708 |
| 0.0022 | 25.1256 | 5000 | 0.6510 | 102.8207 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ykatada/distilbert-base-uncased-finetuned-fake-or-real-news
|
ykatada
| 2024-10-14T13:01:03Z | 197 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T09:36:17Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-fake-or-real-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-fake-or-real-news
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unspecified dataset.
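A minimal classification sketch (the label names are an assumption; unless the config defines an `id2label` mapping, the auto-generated `LABEL_0`/`LABEL_1` ids are returned):
```python
# Minimal sketch; labels default to LABEL_0/LABEL_1 unless id2label was set.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="ykatada/distilbert-base-uncased-finetuned-fake-or-real-news",
)
print(clf("Scientists confirm water has been found on the Moon."))
```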
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf
|
RichardErkhov
| 2024-10-14T13:00:02Z | 5 | 0 | null |
[
"gguf",
"arxiv:2405.03548",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-14T02:08:02Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MAmmoTH2-8x7B - GGUF
- Model creator: https://huggingface.co/TIGER-Lab/
- Original model: https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MAmmoTH2-8x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q2_K.gguf) | Q2_K | 16.12GB |
| [MAmmoTH2-8x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [MAmmoTH2-8x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [MAmmoTH2-8x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [MAmmoTH2-8x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [MAmmoTH2-8x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K.gguf) | Q3_K | 21.0GB |
| [MAmmoTH2-8x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [MAmmoTH2-8x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [MAmmoTH2-8x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [MAmmoTH2-8x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_0.gguf) | Q4_0 | 24.63GB |
| [MAmmoTH2-8x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [MAmmoTH2-8x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [MAmmoTH2-8x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_K.gguf) | Q4_K | 26.49GB |
| [MAmmoTH2-8x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [MAmmoTH2-8x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_1.gguf) | Q4_1 | 27.32GB |
| [MAmmoTH2-8x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_0.gguf) | Q5_0 | 30.02GB |
| [MAmmoTH2-8x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [MAmmoTH2-8x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_K.gguf) | Q5_K | 30.95GB |
| [MAmmoTH2-8x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [MAmmoTH2-8x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_1.gguf) | Q5_1 | 32.71GB |
| [MAmmoTH2-8x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q6_K.gguf) | Q6_K | 35.74GB |
| [MAmmoTH2-8x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/tree/main/) | Q8_0 | 46.22GB |
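To fetch one of the quantized files above programmatically, a sketch using `huggingface_hub` (the Q4_K_M file is chosen purely as an example; any filename from the table works the same way):
```python
# Download a single quant from this repo; run the resulting file with
# llama.cpp or any other GGUF-compatible runtime.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf",
    filename="MAmmoTH2-8x7B.Q4_K_M.gguf",
)
print(path)
```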
Original model description:
---
license: mit
language:
- en
datasets:
- TIGER-Lab/WebInstructSub
metrics:
- accuracy
library_name: transformers
---
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
| **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 |
| **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 |
| **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.
## Chat Format
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings. We also found that the model is not very sensitive to the chat template.
## Usage
You can use the models through Hugging Face's Transformers library. Use the `pipeline` function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution, as sketched below.
Check our Github repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2
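A minimal sketch of that workflow, assuming enough GPU memory for the full-precision 8x7B weights (the quantized GGUF files in this repo are intended for llama.cpp-style runtimes instead):
```python
# Text-generation sketch using the chat format described above.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-8x7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "[INST] What is the sum of the first 100 positive integers? [/INST]"
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```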
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem, and not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
```
|
Malekhmem/MalekHmemMistral
|
Malekhmem
| 2024-10-14T12:59:30Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-14T08:24:28Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jancd/qiniu_by-unsloth_Llama-3.2-1B-Instruct-bnb-4bit
|
jancd
| 2024-10-14T12:58:32Z | 29 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-14T12:57:42Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jancd
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
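Since the repository ships GGUF weights, a hedged loading sketch with `llama-cpp-python` (the filename glob is an assumption; check the repo's file list for the exact quant name):
```python
# Sketch only: Llama.from_pretrained accepts a glob pattern for the filename.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="jancd/qiniu_by-unsloth_Llama-3.2-1B-Instruct-bnb-4bit",
    filename="*.gguf",  # assumed pattern; pick a specific quant if several exist
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```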
|
transitionGap/Qwen2.5-0.5B-bnb-4bit-rera-small
|
transitionGap
| 2024-10-14T12:47:52Z | 87 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Qwen2.5-0.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T12:42:35Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TinyLamda/gpt2-medium-4bit-128g
|
TinyLamda
| 2024-10-14T12:45:50Z | 7 | 0 | null |
[
"gpt2",
"en",
"arxiv:1910.09700",
"license:mit",
"4-bit",
"gptq",
"region:us"
] | null | 2024-10-14T11:49:59Z |
---
language: en
license: mit
---
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"},
{'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"},
{'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a military'},
{'generated_text': 'The man worked as a salesman in Mexico and eventually'},
{'generated_text': 'The man worked as a supervisor at the department for'},
{'generated_text': 'The man worked as a cleaner for the same corporation'},
{'generated_text': 'The man worked as a barman and was involved'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a social worker in a children'},
{'generated_text': 'The woman worked as a marketing manager, and her'},
{'generated_text': 'The woman worked as a customer service agent in a'},
{'generated_text': 'The woman worked as a cleaner for the same corporation'},
{'generated_text': 'The woman worked as a barista and was involved'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process generating inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
Concretely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
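To make the shifted-target setup concrete, a small illustrative sketch (not part of the original card) of how inputs and labels line up for causal language modeling:
```python
# Illustrative sketch: position i predicts token i+1. HF models perform this
# shift internally when labels are supplied; it is spelled out here for clarity.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
ids = tokenizer("Hello, I'm a language model,")["input_ids"]

inputs = ids[:-1]   # what the model sees at each position
targets = ids[1:]   # what it must predict at each position
for x, y in zip(inputs, targets):
    print(f"{tokenizer.decode([x])!r} -> {tokenizer.decode([y])!r}")
```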
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:--------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:|
| GPT-2 Medium | 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
|
CheeLi03/whisper-tiny-zh
|
CheeLi03
| 2024-10-14T12:41:56Z | 14 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"zh",
"dataset:fleurs",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-10-14T10:18:38Z |
---
base_model: openai/whisper-tiny
datasets:
- fleurs
language:
- zh
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Tiny Chinese - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: cmn_hans_cn
split: None
args: 'config: zh split: test'
metrics:
- type: wer
value: 38.568340285601195
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Chinese - Chee Li
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5500
- Wer: 38.5683
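The WER metric above can be computed with the 🤗 `evaluate` library; a minimal sketch with placeholder transcripts (for Chinese, the score depends heavily on how text is segmented into "words", which matters when comparing numbers across cards):
```python
# Minimal WER sketch; a real evaluation would iterate over the Google Fleurs
# cmn_hans_cn test split instead of these placeholder strings.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["今天 天气 很好"]  # hypothesis, pre-segmented into "words"
references = ["今天 天气 不错"]   # reference, same segmentation
print(wer_metric.compute(predictions=predictions, references=references))
```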
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.182 | 4.3668 | 1000 | 0.4832 | 42.5418 |
| 0.0473 | 8.7336 | 2000 | 0.5039 | 38.0568 |
| 0.0121 | 13.1004 | 3000 | 0.5371 | 40.1699 |
| 0.0079 | 17.4672 | 4000 | 0.5500 | 38.5683 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
doktor47/zinemind_msft_16temp
|
doktor47
| 2024-10-14T12:41:08Z | 187 | 0 |
transformers
|
[
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-10-14T12:40:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
|
BoltMonkey
| 2024-10-14T12:28:28Z | 77 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralDaredevil-8B-abliterated",
"grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"conversational",
"base_model:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"base_model:merge:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"license:llama3.1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-01T07:44:17Z |
---
library_name: transformers
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
model-index:
- name: NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 79.99
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 10.27
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.14
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.47
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.37
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
name: Open LLM Leaderboard
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralDaredevil-8B-abliterated
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
license: llama3.1
pipeline_tag: text-generation
---
# NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated
NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B](https://huggingface.co/grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B)
Quantised versions of this model are available in GGUF format from [here](https://huggingface.co/BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF)
Or use the following direct links:
* [Q5_K_M](https://huggingface.co/BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF/resolve/main/neuraldaredevil-supernova7b-dareties-abliterated-q5_k_m.gguf?download=true)
* [Q4_K_M](https://huggingface.co/BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-abliterated-GGUF/resolve/main/neuraldaredevil-supernova7b-dareties-abliterated-q4_k_m.gguf?download=true)
## open-llm-leaderboard results
| Average | IFEval | BBH | MATH Lvl 5 | GPQA | MUSR | MMLU-PRO | Source |
|--------:|-------:|----:|-----------:|-----:|-----:|---------:|:-------|
| 27.5 | 79.99 | 30.76 | 10.27 | 4.14 | 9.47 | 30.37 | [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) |
## 🧩 Configuration
```yaml
models:
- model: NousResearch/Meta-Llama-3.1-8B-Instruct
- model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
density: 0.53
weight: 0.55
- model: grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
parameters:
density: 0.53
weight: 0.45
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "BoltMonkey/NeuralDaredevil-SuperNova-Lite-7B-DARETIES-ablorabliterated"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
arjunanand13/PALD_Florence-10e
|
arjunanand13
| 2024-10-14T12:22:15Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-14T12:21:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MikhailDub/distilbert-finetune
|
MikhailDub
| 2024-10-14T12:21:35Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T08:48:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sprenkamp/BERTopic_UKR_CH
|
sprenkamp
| 2024-10-14T12:13:45Z | 5 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-10-14T12:10:29Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# BERTopic_UKR_CH
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sprenkamp/BERTopic_UKR_CH")
topic_model.get_topic_info()
```
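Beyond inspecting the topic table, the loaded model can assign topics to new documents; a minimal sketch, assuming the saved pipeline bundles its embedding model (the example text is a placeholder):
```python
from bertopic import BERTopic

topic_model = BERTopic.load("sprenkamp/BERTopic_UKR_CH")

# Assign topics to unseen documents.
topics, probs = topic_model.transform(
    ["Where can I register for health insurance after arriving?"]
)
print(topics, probs)
```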
## Topic overview
* Number of topics: 267 (after training, we manually assessed all clusters and merged similar ones, leading to 17 distinct clusters)
* Number of training documents: 550677
<details>
<summary>Click here for an overview of all topics.</summary>
The following Python code uses a dictionary to map the 267 clusters found by the algorithm to the 17 distinct clusters we identified by qualitative analysis.
```python
topic_mapping = {-1: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 0: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Information Requests'}, 1: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 2: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Insurance'}, 3: {'cluster_id': 2, 'cluster_name': 'Pet', 'sub_cluster': 'Pet'}, 4: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Asylum'}, 5: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Ticket Inquiries'}, 6: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 7: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 8: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 9: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}, 10: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 11: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Translation Services'}, 12: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Passport'}, 13: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Dentistry'}, 14: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 15: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Currency'}, 16: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Banking'}, 17: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Protocols'}, 18: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Mail'}, 19: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 20: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Clothing'}, 21: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Financial Assistance'}, 22: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 23: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 24: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 25: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 26: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 27: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 28: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Leasing Regulation'}, 29: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 30: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Open Chat'}, 31: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 32: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 33: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 34: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Food'}, 35: {'cluster_id': 2, 'cluster_name': 'Pet', 'sub_cluster': 'Pet'}, 36: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 37: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Vehicle'}, 38: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 39: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 40: {'cluster_id': 1, 
'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 41: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 42: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Status Acquisition'}, 43: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Consulate Services'}, 44: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 45: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 46: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}, 47: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 48: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Vehicle'}, 49: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 50: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 51: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'War Chat'}, 52: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 53: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 54: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 55: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 56: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 57: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Asylum'}, 58: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 59: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 60: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Tax'}, 61: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Expense'}, 62: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 63: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 64: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 65: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 66: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 67: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 68: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 69: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Family Reunion'}, 70: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 71: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 72: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 73: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 74: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 75: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Vaccinations'}, 76: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Police'}, 77: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Financial Assistance'}, 78: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 79: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 80: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 81: {'cluster_id': 1, 'cluster_name': 
'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 82: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 83: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Parking'}, 84: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 85: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 86: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 87: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Legal information'}, 88: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 89: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 90: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Insurance'}, 91: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 92: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 93: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 94: {'cluster_id': 12, 'cluster_name': 'Religious Information', 'sub_cluster': 'Religious Information'}, 95: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 96: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 97: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 98: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 99: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 100: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Banking'}, 101: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 102: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 103: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Library'}, 104: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Library'}, 105: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Tax'}, 106: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Police'}, 107: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 108: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 109: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 110: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Legal information'}, 111: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Passport'}, 112: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 113: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 114: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 115: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 116: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 117: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 118: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 119: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 120: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 121: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 122: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Translation Services'}, 123: {'cluster_id': 1, 
'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Insurance'}, 124: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 125: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Legal information'}, 126: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 127: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 128: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 129: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Psychotherapy'}, 130: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 131: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 132: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 133: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 134: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 135: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Home Appliances'}, 136: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 137: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 138: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 139: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Tax'}, 140: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Vaccinations'}, 141: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 142: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 143: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 144: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 145: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 146: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 147: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 148: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Vehicle'}, 149: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 150: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 151: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 152: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 153: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 154: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 155: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 156: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 157: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 158: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 159: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 160: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 161: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 162: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 163: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 164: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 165: {'cluster_id': 7, 'cluster_name': 
'Living Essentials', 'sub_cluster': 'Shopping'}, 166: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 167: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 168: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 169: {'cluster_id': 12, 'cluster_name': 'Religious Information', 'sub_cluster': 'Religious Information'}, 170: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 171: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Taxi Services'}, 172: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 173: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 174: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 175: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Open Chat'}, 176: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 177: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 178: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 179: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 180: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 181: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 182: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Divorce'}, 183: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 184: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Protocols'}, 185: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 186: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 187: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 188: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Marriage'}, 189: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 190: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 191: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 192: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 193: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 194: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 195: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 196: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 197: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 198: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 199: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}, 200: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 201: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Logistics'}, 202: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 203: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Consulate Services'}, 204: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 205: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Other Item Request'}, 206: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Leasing 
Regulation'}, 207: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Other Item Request'}, 208: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 209: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 210: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 211: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 212: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 213: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Infant & Toddler Care'}, 214: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 215: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 216: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 217: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 218: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 219: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 220: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 221: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 222: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Hospice Care'}, 223: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 224: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 225: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 226: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 227: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Dentistry'}, 228: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 229: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 230: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Customs'}, 231: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 232: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 233: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Customs'}, 234: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Customs'}, 235: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Disability'}, 236: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 237: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 238: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 239: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 240: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 241: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 242: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 243: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 244: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 245: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 246: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 247: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 248: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 249: {'cluster_id': 10, 
'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 250: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 251: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 252: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 253: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Open Chat'}, 254: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 255: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 256: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 257: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 258: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 259: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Protocols'}, 260: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 261: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 262: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 263: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Infant & Toddler Care'}, 264: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 265: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}}
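
# Attach the curated cluster labels to the dataframe via the raw predictions.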
df['cluster_id_fit'] = df['predicted_class_old'].map(lambda x: topic_mapping[x]['cluster_id'])
df['predicted_class'] = df['predicted_class_old'].map(lambda x: topic_mapping[x]['cluster_name'])
df['sub_cluster'] = df['predicted_class_old'].map(lambda x: topic_mapping[x]['sub_cluster'])
```
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: auto
* seed_topic_list: None
* top_n_words: 10
* verbose: True
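For reference, a minimal sketch of how these values map onto the `BERTopic` constructor (an assumption based on the standard bertopic API; `language=None` suggests a custom embedding model was supplied separately):
```python
from bertopic import BERTopic

topic_model = BERTopic(
    calculate_probabilities=False,
    language=None,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics="auto",
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
)
```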
## Framework versions
* Numpy: 1.24.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.4
* Pandas: 2.0.3
* Scikit-Learn: 1.3.1
* Sentence-transformers: 2.2.2
* Transformers: 4.34.0
* Numba: 0.58.0
* Plotly: 5.17.0
* Python: 3.8.10
|
arjunanand13/Florence-enphase-4.44.0
|
arjunanand13
| 2024-10-14T12:07:49Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T11:14:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Prasad12344321/Qwen2.5-0.5B-bnb-4bit-python
|
Prasad12344321
| 2024-10-14T12:07:02Z | 20 | 0 | null |
[
"safetensors",
"gguf",
"qwen2",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-26T17:38:46Z |
---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
BVRA/MegaDescriptor-L-384
|
BVRA
| 2024-10-14T12:06:05Z | 403 | 5 |
wildlife-datasets
|
[
"wildlife-datasets",
"pytorch",
"image-classification",
"ecology",
"animals",
"re-identification",
"arxiv:2103.14030",
"arxiv:2304.07193",
"license:cc-by-nc-4.0",
"region:us"
] |
image-classification
| 2023-09-27T11:45:12Z |
---
tags:
- image-classification
- ecology
- animals
- re-identification
library_name: wildlife-datasets
license: cc-by-nc-4.0
---
# Model card for MegaDescriptor-L-384
A Swin-L image feature model, pre-trained in a supervised fashion on animal re-identification datasets.
## Model Details
- **Model Type:** Animal re-identification / feature backbone
- **Model Stats:**
- Params (M): 228.8
- Image size: 384 x 384
- Architecture: swin_large_patch4_window12_384
- **Paper:** [WildlifeDatasets: An Open-Source Toolkit for Animal Re-Identification](https://openaccess.thecvf.com/content/WACV2024/html/Cermak_WildlifeDatasets_An_Open-Source_Toolkit_for_Animal_Re-Identification_WACV_2024_paper.html)
- **Related Papers:**
- [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
- [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/pdf/2304.07193.pdf)
- **Pretrain Dataset:** All available re-identification datasets; see https://github.com/WildlifeDatasets/wildlife-datasets
## Model Usage
### Image Embeddings
```python
import timm
import torchvision.transforms as T
from PIL import Image
from urllib.request import urlopen

model = timm.create_model("hf-hub:BVRA/MegaDescriptor-L-384", pretrained=True)
model = model.eval()

preprocess = T.Compose([
    T.Resize(size=(384, 384)),
    T.ToTensor(),
    T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
output = model(preprocess(img).unsqueeze(0))  # (1, num_features) shaped feature tensor
```
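For re-identification, embeddings of two observations are typically compared with a similarity measure; a minimal sketch continuing from the block above (the image paths are placeholders):
```python
import torch.nn.functional as F
from PIL import Image

# Placeholder image paths for two animal observations.
emb_a = model(preprocess(Image.open("observation_a.jpg")).unsqueeze(0))
emb_b = model(preprocess(Image.open("observation_b.jpg")).unsqueeze(0))

# Higher cosine similarity suggests the same individual.
print(F.cosine_similarity(emb_a, emb_b).item())
```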
## Citation
```bibtex
@inproceedings{vcermak2024wildlifedatasets,
title={WildlifeDatasets: An open-source toolkit for animal re-identification},
author={{\v{C}}erm{\'a}k, Vojt{\v{e}}ch and Picek, Lukas and Adam, Luk{\'a}{\v{s}} and Papafitsoros, Kostas},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={5953--5963},
year={2024}
}
```
|
arjunanand13/PALD_Florence-5e
|
arjunanand13
| 2024-10-14T12:04:23Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T12:01:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Paranchai/my_awesome_mind_model
|
Paranchai
| 2024-10-14T11:57:56Z | 160 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-10-14T11:54:31Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.04424778761061947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6597
- Accuracy: 0.0442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
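These settings correspond roughly to the following 🤗 `TrainingArguments` (a sketch, not the exact training script; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_mind_model",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = total train batch size 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```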
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6473 | 0.0442 |
| No log | 1.8667 | 7 | 2.6533 | 0.0531 |
| 2.6326 | 2.9333 | 11 | 2.6546 | 0.0619 |
| 2.6326 | 4.0 | 15 | 2.6494 | 0.0442 |
| 2.6326 | 4.8 | 18 | 2.6492 | 0.0619 |
| 2.6206 | 5.8667 | 22 | 2.6528 | 0.0619 |
| 2.6206 | 6.9333 | 26 | 2.6578 | 0.0265 |
| 2.6121 | 8.0 | 30 | 2.6597 | 0.0442 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
anhdang000/Florence-2-base-ChartQA
|
anhdang000
| 2024-10-14T11:51:07Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-07T15:13:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gabrielnogueiralt/Llama-3.1-8B-Instruct-Spec-Crafter-AI-merged
|
gabrielnogueiralt
| 2024-10-14T11:49:02Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-14T11:37:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
doniyorkhalilov/whisper-small-uz
|
doniyorkhalilov
| 2024-10-14T11:39:50Z | 86 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"uz",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-13T06:48:04Z |
---
base_model: openai/whisper-small
datasets:
- mozilla-foundation/common_voice_17_0
language:
- uz
library_name: transformers
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: Whisper Small Uz - Doniyor Halilov
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: uz
split: test
args: 'config: uz, split: test'
metrics:
- type: wer
value: 54.74920162871594
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Uz - Doniyor Halilov
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0147
- Wer: 54.7492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
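Expressed as a `Seq2SeqTrainingArguments` sketch (a reconstruction, not the exact training script; `output_dir` is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-uz",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=8,  # 1 * 8 = total train batch size 8
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=200,
    fp16=True,  # "Native AMP" mixed precision
)
```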
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.612 | 0.0132 | 100 | 1.2551 | 69.5533 |
| 1.1271 | 0.0264 | 200 | 1.0147 | 54.7492 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.1
- Tokenizers 0.20.1
|
JosephNguyen/Meta-Llama-3.1-8B-finetuned
|
JosephNguyen
| 2024-10-14T11:37:54Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-10T10:54:08Z |
---
base_model: unsloth/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** JosephNguyen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
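A minimal loading sketch with Unsloth (the `max_seq_length` and 4-bit settings are assumptions, not recorded in this card):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="JosephNguyen/Meta-Llama-3.1-8B-finetuned",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption
)
FastLanguageModel.for_inference(model)  # enables Unsloth's fast inference path
```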
|
cuongdev/tonghop5
|
cuongdev
| 2024-10-14T11:35:19Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-10-14T11:31:37Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### tonghop5 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
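A minimal sketch for loading the checkpoint with diffusers (the instance prompt "tonghop5" is an assumption based on the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/tonghop5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of tonghop5").images[0]  # prompt token is an assumption
image.save("sample.png")
```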
Sample pictures of this concept:
|
tt1225/Vintern-1B-v2-Custom
|
tt1225
| 2024-10-14T11:33:39Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"vision",
"visual-question-answering",
"custom_code",
"vi",
"en",
"dataset:5CD-AI/Viet-OCR-VQA",
"dataset:5CD-AI/Viet-Doc-VQA",
"dataset:5CD-AI/Viet-Doc-VQA-II",
"dataset:Vi-VLM/Vista",
"dataset:5CD-AI/Viet-Receipt-VQA",
"dataset:5CD-AI/Viet-Sketches-VQA",
"dataset:5CD-AI/Viet-Geometry-VQA",
"dataset:5CD-AI/Viet-Wiki-Handwriting",
"dataset:5CD-AI/Viet-ComputerScience-VQA",
"dataset:5CD-AI/Viet-Handwriting-gemini-VQA",
"dataset:5CD-AI/Viet-Menu-gemini-VQA",
"dataset:5CD-AI/Viet-Vintext-gemini-VQA",
"dataset:5CD-AI/Viet-OpenViVQA-gemini-VQA",
"dataset:5CD-AI/Viet-Resume-VQA",
"dataset:5CD-AI/Viet-ViTextVQA-gemini-VQA",
"arxiv:2408.12480",
"arxiv:2407.10671",
"arxiv:2404.16821",
"arxiv:2404.07922",
"base_model:OpenGVLab/InternVL2-1B",
"base_model:finetune:OpenGVLab/InternVL2-1B",
"region:us"
] |
visual-question-answering
| 2024-10-14T05:50:46Z |
---
base_model: OpenGVLab/InternVL2-1B
library_name: transformers
datasets:
- 5CD-AI/Viet-OCR-VQA
- 5CD-AI/Viet-Doc-VQA
- 5CD-AI/Viet-Doc-VQA-II
- Vi-VLM/Vista
- 5CD-AI/Viet-Receipt-VQA
- 5CD-AI/Viet-Sketches-VQA
- 5CD-AI/Viet-Geometry-VQA
- 5CD-AI/Viet-Wiki-Handwriting
- 5CD-AI/Viet-ComputerScience-VQA
- 5CD-AI/Viet-Handwriting-gemini-VQA
- 5CD-AI/Viet-Menu-gemini-VQA
- 5CD-AI/Viet-Vintext-gemini-VQA
- 5CD-AI/Viet-OpenViVQA-gemini-VQA
- 5CD-AI/Viet-Resume-VQA
- 5CD-AI/Viet-ViTextVQA-gemini-VQA
language:
- vi
- en
pipeline_tag: visual-question-answering
tags:
- vision
---
<div align="center">
<img src="Vintern_logo.png" width="700"/>
</div>
## Vintern-1B-v2 ❄️ (Viet-InternVL2-1B-v2) - The LLaVA 🌋 Challenger
We are excited to introduce **Vintern-1B-v2**, a Vietnamese 🇻🇳 multimodal model that combines the advanced Vietnamese language model [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct)[1] with the latest visual model, [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px)[2] (CVPR 2024). The model excels in tasks such as OCR-VQA, Doc-VQA, and Chart-VQA. With only 1 billion parameters and a **4096-token context length**, it was finetuned from [Viet-InternVL2-1B](https://huggingface.co/5CD-AI/Viet-InternVL2-1B) on over 3 million specialized image-question-answer pairs covering optical character recognition 🔍, text recognition 🔤, document extraction 📑, and general VQA. The model can be integrated into various on-device applications 📱, demonstrating its versatility and robust capabilities.
[**\[🤗 HF Demo\]**](https://huggingface.co/spaces/khang119966/Vintern-v2-Demo)
Notably, the model can be finetuned on a single T4 GPU in Google Colab by following the instructions provided at the end of this section.
## Model Details
| Model Name | Vision Part | Language Part |
| :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: |
| Vintern-1B-v2 | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) |
Vintern-1B-v2 is an instruction-tuned multimodal large language model optimized for Vietnamese multimodal tasks. It consists of [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px), an MLP projector, and [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
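For illustration, a minimal inference sketch, assuming the checkpoint follows the upstream InternVL2 custom-code API (the checkpoint id, the preprocessing helper, and the exact `chat` signature are assumptions, not confirmed by this card):
```python
import torch
from transformers import AutoModel, AutoTokenizer

CHECKPOINT = "5CD-AI/Vintern-1B-v2"  # assumed checkpoint id

model = AutoModel.from_pretrained(
    CHECKPOINT, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval()
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT, trust_remote_code=True)

# pixel_values: a (num_tiles, 3, 448, 448) tensor produced by the repo's image
# preprocessing helper (not shown here). The chat call mirrors InternVL2:
# response = model.chat(tokenizer, pixel_values, "<image>\nMô tả hình ảnh.",
#                       dict(max_new_tokens=512))
```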
## Training details 📚
The fine-tuning dataset was meticulously sampled in part from the following datasets:
[Viet-OCR-VQA 📚](https://huggingface.co/datasets/5CD-AI/Viet-OCR-VQA), [Viet-Doc-VQA 📄](https://huggingface.co/datasets/5CD-AI/Viet-Doc-VQA), [Viet-Doc-VQA-II 📑](https://huggingface.co/datasets/5CD-AI/Viet-Doc-VQA-II), [Vista 🖼️](https://huggingface.co/datasets/Vi-VLM/Vista), [Viet-Receipt-VQA 🧾](https://huggingface.co/datasets/5CD-AI/Viet-Receipt-VQA), [Viet-Sketches-VQA ✏️](https://huggingface.co/datasets/5CD-AI/Viet-Sketches-VQA), [Viet-Geometry-VQA 📐](https://huggingface.co/datasets/5CD-AI/Viet-Geometry-VQA), [Viet-Wiki-Handwriting ✍️](https://huggingface.co/datasets/5CD-AI/Viet-Wiki-Handwriting), [Viet-ComputerScience-VQA 💻](https://huggingface.co/datasets/5CD-AI/Viet-ComputerScience-VQA), [Viet-Handwriting-gemini-VQA 🖋️](https://huggingface.co/datasets/5CD-AI/Viet-Handwriting-gemini-VQA), [Viet-Menu-gemini-VQA 🍽️](https://huggingface.co/datasets/5CD-AI/Viet-Menu-gemini-VQA), [Viet-Vintext-gemini-VQA 📜](https://huggingface.co/datasets/5CD-AI/Viet-Vintext-gemini-VQA), [Viet-OpenViVQA-gemini-VQA 🧠](https://huggingface.co/datasets/5CD-AI/Viet-OpenViVQA-gemini-VQA), [Viet-Resume-VQA 📃](https://huggingface.co/datasets/5CD-AI/Viet-Resume-VQA), [Viet-ViTextVQA-gemini-VQA 📑](https://huggingface.co/datasets/5CD-AI/Viet-ViTextVQA-gemini-VQA)
## Benchmarks 📈
Since there are still many different metrics to be tested, **we first chose a quick and simple metric to guide the development of our model**. Our metric is inspired by Lavy[4]. For the time being, we use GPT-4 to evaluate the quality of answers on two datasets: OpenViVQA and ViTextVQA. Detailed results are available [here](https://huggingface.co/datasets/5CD-AI/Vintern-1B-v2-Benchmark-gpt4o-score). The inputs are images, questions, labels, and predicted answers; the model returns a score from 0 to 10 for the quality of each answer. The results are shown in the table below.
<table border="1" cellspacing="0" cellpadding="5">
<tr align="center">
<td rowspan="2"><b>Model</b></td>
<td colspan="2"><b>gpt4o-score</b></td>
</tr>
<tr align="center">
<td><b>OpenViVQA-dev</b></td>
<td><b>ViTextVQA-dev</b></td>
</tr>
<tr align="center">
<td align="left">Vintern-1B</td>
<td>7.1/10</td>
<td>7.6/10</td>
</tr>
<tr align="center">
<td align="left"><b>Vintern-1B-v2</b></td>
<td><b>7.7/10</b></td>
<td><b>7.7/10</b></td>
</tr>
</table>
Benchmark results on [MTVQA](https://github.com/bytedance/MTVQA/tree/main):
| Models | Open-Source | Vietnamese Score |
|:----------------------------------:|:-------------:|:------------------:|
| Qwen2-VL 72B (Top 1) | ✗ | 41.6 |
| GPT-4o (Top 2) | ✗ | 34.2 |
| **Vintern-1B-V2** (Top 3) | ✓ | **31.7** |
| Qwen2-VL 7B | ✓ | 30.0 |
| Claude3 Opus | ✗ | 29.1 |
| GPT-4o mini | ✗ | 29.1 |
| GPT-4V | ✗ | 28.9 |
| Gemini Ultra | ✗ | 28.6 |
| InternVL2 76B | ✓ | 26.9 |
| QwenVL Max | ✗ | 23.5 |
| Claude3 Sonnet | ✗ | 20.8 |
| QwenVL Plus | ✗ | 18.1 |
| MiniCPM-V2.5 | ✓ | 15.3 |
| InternVL-V1.5 | ✗ | 12.4 |
We are still working on more detailed benchmarks.
## Examples
<div align="center">
<img src="ex_images/1.png" width="500"/>
</div>
```
User: <image>
Mô tả hình ảnh một cách chi tiết.
Assistant: Bức ảnh là một trang giấy kẻ ô vuông với dòng chữ viết tay bằng mực đen. Bố cục đơn giản, chỉ có nội dung văn bản được viết theo chiều dọc từ trên xuống dưới. Các dòng chữ được sắp xếp theo thứ tự từ trên xuống dưới như sau:
- Nghe thầy đọc thơ
- Em nghe thầy đọc bao ngày
- Tiếng thơ đỏ nắng xanh cây quanh nhà
- Mái chèo nghe vọng sông xa
- Em êm nghe tiếng của bà năm xưa
- Nghe trăng thở động tàu dừa
- Rào rào nghe chuyển con mưa giữa trời
- Thêm yêu tiếng hát nụ cười
- Nghe thơ em thấy đất trời đẹp ra.
- Trần Đăng Khoa
```
<div align="center">
<img src="ex_images/4.jpg" width="500"/>
</div>
```
User: <image>
Tam giác ABC là gì của đường tròn tâm O ?
Assistant: Tam giác ABC là tam giác nội tiếp của đường tròn tâm O.
```
<div align="center">
<img src="ex_images/2.jpg" width="500"/>
</div>
```
User: <image>
Mô tả hình ảnh một cách chi tiết.
Assistant: Bức ảnh chụp một con sông ở Việt Nam với những ngôi nhà cổ kính được sơn màu vàng nhạt. Phía trước là một con thuyền nhỏ màu đỏ và một chiếc thuyền lớn màu xanh dương.
Trên thuyền có một người đàn ông đang chèo thuyền. Hai bên bờ sông là những hàng cây xanh mát. Bầu trời trong xanh và có một vài đám mây trắng.
Ở góc dưới bên trái của bức ảnh có dòng chữ "VNEXPRESS".
```
<div align="center">
<img src="ex_images/3.png" width="400"/>
</div>
```
User: <image>
Mô tả hình ảnh một cách chi tiết.
Assistant: Bức ảnh là một thông báo tuyển dụng của công ty Mobifone. Bố cục được chia thành 3 phần chính:
- Phần đầu tiên là tiêu đề "TUYỂN DỤNG" được in chữ màu đỏ đậm, cỡ chữ lớn, nằm ở vị trí trung tâm phía trên cùng của bức ảnh.
- Phần thứ hai là phần giới thiệu về vị trí và yêu cầu của ứng viên. Phần này được trình bày theo dạng liệt kê các mục:
- Vị trí: Nhân viên kinh doanh
- Địa điểm: Thành phố và các huyện Yên Sơn, Sơn Dương, Hàm Yên, Chiêm Hóa
- Yêu cầu: Trình độ: Tốt nghiệp Trung cấp trở lên, Độ tuổi: Không quá 35 tuổi
- Phần thứ ba là phần giới thiệu về chế độ đãi ngộ. Phần này được trình bày theo dạng liệt kê các mục:
- Chế độ đãi ngộ:
- Mức lương: 7.000.000đ - 17.000.000đ
- Thưởng thi đua hàng tháng, quý: Thưởng theo doanh số
- Chế độ bảo hiểm theo quy định
- Được trau dồi các kỹ năng: Giao tiếp, thuyết phục, chăm sóc khách hàng...
Phần cuối cùng là phần "NỘP HỒ SƠ" với thông tin liên lạc của người nộp hồ sơ. Phần này được trình bày theo dạng liệt kê các mục:
- Người nhận: Trần Thị Thúy
- Địa chỉ: 547 đường Quang Trung, Tổ 25, Phường Phan Thiết, TP Tuyên Quang
- Điện thoại liên hệ: 0901.452.888
- Email nhận hồ sơ: thuy.tranthi.kd@mobifone.vn
- Thời hạn nộp hồ sơ: đến hết ngày 28/5/2021
```
<div align="center">
<img src="ex_images/5.jpg" width="400"/>
</div>
```
User: <image>
Con nào sẽ thắng ?
Assistant: Con thỏ sẽ thắng.
```
<div align="center">
<img src="ex_images/6.png" width="400"/>
</div>
```
User: <image>
Mô tả chi tiết hình ảnh.
Assistant: Bức ảnh hiển thị một đoạn mã nguồn được viết bằng ngôn ngữ lập trình Python. Đoạn mã này bao gồm hai phần:
Phần đầu: Nêu tiêu đề "Example of avoiding complex expressions" (Mô tả ví dụ về việc tránh các hàm phức tạp).
Phần chính: Bao gồm hai hàm square_numbers và main().
Hàm square_numbers nhận một tham số là danh sách số từ 1 đến 4. Hàm này sử dụng vòng lặp for để duyệt qua mỗi số trong danh sách và thêm nó vào danh sách squares. Sau đó, hàm trả về danh sách squares.
Hàm main() tạo một biến numbers với giá trị [1, 2, 3, 4], sau đó gọi hàm square_numbers và ghi kết quả vào biến result. Cuối cùng, hàm print(result) sẽ in ra kết quả của hàm main().
```
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and how to generate content.
To run inference, you can also follow the steps outlined in our Colab inference notebook
[](https://colab.research.google.com/drive/1ZD1oB56PF0lF66RCuTVJYLTEV0tM3CFf?usp=sharing)
```python
import numpy as np
import torch
import torchvision.transforms as T
# from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
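    # Pick the tiling grid whose aspect ratio is closest to the input image's;
    # on ties, prefer the larger grid when the image is big enough to fill it.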
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
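    # Split the image into between min_num and max_num square tiles of side
    # image_size, matching its aspect ratio; optionally append a global thumbnail.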
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
model = AutoModel.from_pretrained(
"5CD-AI/Vintern-1B-v2",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("5CD-AI/Vintern-1B-v2", trust_remote_code=True, use_fast=False)
test_image = 'test-image.jpg'
pixel_values = load_image(test_image, max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=False, num_beams=3, repetition_penalty=2.5)
question = '<image>\nMô tả hình ảnh một cách chi tiết.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
#question = "Câu hỏi khác ......"
#response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
#print(f'User: {question}\nAssistant: {response}')
```
## Finetune on your Data
[](https://colab.research.google.com/drive/1bK6fpWfResjv9UxWoKHDStXQ8bop3a6Z?usp=sharing)
## Citation
```
@misc{doan2024vintern1befficientmultimodallarge,
title={Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese},
author={Khang T. Doan and Bao G. Huynh and Dung T. Hoang and Thuc D. Pham and Nhat H. Pham and Quan T. M. Nguyen and Bang Q. Vo and Suong N. Hoang},
year={2024},
eprint={2408.12480},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2408.12480},
}
```
## References
[1] Yang, An, et al. "Qwen2 technical report." arXiv preprint arXiv:2407.10671 (2024).
[2] Chen, Zhe, et al. "Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[3] Chen, Zhe, et al. "How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites." arXiv preprint arXiv:2404.16821 (2024).
[4] Tran, Chi, and Huong Le Thanh. "LaVy: Vietnamese Multimodal Large Language Model." arXiv preprint arXiv:2404.07922 (2024).
|
QuantFactory/Crispy_Crab_4B-GGUF
|
QuantFactory
| 2024-10-14T11:27:29Z | 111 | 2 |
transformers
|
[
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"base_model:jeiku/instructered4B",
"base_model:quantized:jeiku/instructered4B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T10:55:21Z |
---
library_name: transformers
license: other
base_model: jeiku/instructered4B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: TheBest4B
results: []
---
[](https://hf.co/QuantFactory)
# QuantFactory/Crispy_Crab_4B-GGUF
This is a quantized version of [FourOhFour/Crispy_Crab_4B](https://huggingface.co/FourOhFour/Crispy_Crab_4B) created using llama.cpp.
# Original Model Card
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: jeiku/instructered4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
hub_model_id: jeiku/TheBest4B
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
datasets:
- path: FourOhFour/RP_Phase
type: sharegpt
conversation: chatml
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.0025
output_dir: ./outputs/out
adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
wandb_project: EXP4B
wandb_entity:
wandb_watch:
wandb_name: EXP4B
wandb_log_model:
gradient_accumulation_steps: 12
micro_batch_size: 3
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br>
# TheBest4B
This model is a fine-tuned version of [jeiku/instructered4B](https://huggingface.co/jeiku/instructered4B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 12
- total_train_batch_size: 72
- total_eval_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 22
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8805 | 0.0089 | 1 | 2.7425 |
| 1.7985 | 0.2491 | 28 | 2.2908 |
| 1.727 | 0.4981 | 56 | 2.1943 |
| 1.7429 | 0.7472 | 84 | 2.1665 |
| 1.6867 | 0.9963 | 112 | 2.1309 |
| 1.6463 | 1.2461 | 140 | 2.1267 |
| 1.593 | 1.4959 | 168 | 2.1148 |
| 1.604 | 1.7457 | 196 | 2.1129 |
| 1.6085 | 1.9955 | 224 | 2.1148 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
Data-Lab/multilingual-e5-small-cross-encoder-v0.1
|
Data-Lab
| 2024-10-14T11:26:57Z | 19 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6577",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-14T11:24:17Z |
---
base_model: intfloat/multilingual-e5-small
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6577
- loss:CosineSimilarityLoss
widget:
- source_sentence: 'query: алерана бальзам'
sentences:
- 'passage: Мороженое "Джиандуйя" фундучно-шоколад.; Шоколадный пломбир с фундуком
в шоколаде. Нежная сливочная текстура.; Мороженое; Цена: 138.0; Рейтинг: 4.9'
- 'passage: Нюда спрей д/местного применения педикулицидный 50мл; Средство с физическим
принципом действия для уничтожения головных вшей, личинок и гнид; Лекарственные
средства; Цена: 1136.0; Рейтинг: 0.0'
- 'passage: АнвиМакс Мед-Лимон пор.д/приг.р-ра д/приема внутрь пак.№12; Препарат
для лечения гриппа и ОРВИ. Комбинирует противовирусное и симптоматическое действие.;
Лекарственные средства; Цена: 806.0; Рейтинг: 0.0'
- source_sentence: 'query: тунец'
sentences:
- 'passage: Мармелад жевательный "Ассорти"; Жевательный мармелад с насыщенным фруктовым
вкусом. ; Мармелад, халва, зефир, восточные сладости; Цена: 106.0; Рейтинг: 4.9'
- 'passage: Фаритол сироп 150мл Подорожник/Алтей/Чабрец; Сироп подорожника, алтея
и чабреца для детей с 3-х лет и взрослых. ЭКСТРАКТ ЛИСТЬЕВ ПОДОРОЖНИКА обладает
успокаивающим действием на горло и голосовые связки. Оказывает отхаркивающее действие
и способствует более быстрому выведению мокроты из бронхов. ЭКСТРАКТ ЧАБРЕЦА обладает
бактерицидным, противовоспалительным, отхаркивающим свойствами. Оказывает успокаивающее
и смягчающее действие на слизистые. Способствует снижению вязкости мокроты, разрыхлению
воспалительных налетов, ускоряя выведение из организма продуктов воспаления и
слизистых масс. ЭКСТРАКТ КОРНЕЙ АЛТЕЯ мягко обволакивая слизистые оболочки, препятствует
их раздражению и обладает смягчающим и увлажняющим свойствами. Активизируя двигательную
активность реснитчатого эпителия верхних дыхательных путей, способствует разжижению
слизи, стимулирует высвобождение дыхательных каналов от мокроты.; Витамины, БАДы;
Цена: 278.0; Рейтинг: 0.0'
- 'passage: Батончик протеиновый Snaq Fabriq Арахис и карамель 50 г; Протеиновый
батончик с натуральными ингредиентами, без сахара и глютена.; Батончики; Цена:
99.0; Рейтинг: 4.9'
- source_sentence: 'query: вареники сулугуни'
sentences:
- 'passage: Чипсы Naitori из морских водорослей 3 г; Хрустящие чипсы из водорослей
нори, обжаренные на оливковом масле с солью. ; Снеки; Цена: 55.0; Рейтинг: 4.9'
- 'passage: Вареники с вишневой начинкой, 500 г; Постные вареники с вишней, тонким
тестом; Пельмени, вареники и манты; Цена: 336.0; Рейтинг: 4.7'
- 'passage: Протеин сывороточный "Ваниль"; Сывороточный протеин с ванильным вкусом;
Протеин; Цена: 1464.0; Рейтинг: 4.9'
- source_sentence: 'query: каша молочно'
sentences:
- 'passage: Вишня сушеная, 1 кг; Спелая, сочная вишня, высушена, чтобы сохранить
цвет и вкус; Орехи, сухофрукты; Цена: 3000.0; Рейтинг: 4.6'
- 'passage: Крем для тела с кислотами Acid Cloud, 250 мл; Крем с энзимами и кислотами
для красоты и здоровья кожи. Борется с высыпаниями, шелушениями и постакне, возвращает
мягкость коже.; Кремы косметические; Цена: 1792.0; Рейтинг: 5.0'
- 'passage: Шоколад ремесленный горький 74%; Ремесленный шоколад из перуанских и
колумбийских какао-бобов; Шоколад; Цена: 310.0; Рейтинг: 4.9'
- source_sentence: 'query: хачапури по аджарски'
sentences:
- 'passage: Напиток на чайном грибе с цветами липы и чабрецом, 350 мл; Тонизирующий
напиток природного брожения. Приготовлен на чистой культуре чайного гриба с цветами
липы, чабрецом и иван-чаем. Вкус: кислинка и терпкость трав.; Квас; Цена: 98.0;
Рейтинг: 4.7'
- 'passage: Дезодорант-антиперспирант Nivea Men Серебряная защита 50 мл; Антибактериальный
дезодорант с ионами серебра для защиты от пота и запаха на 99,9%. Успокаивает
кожу и имеет свежий мужской аромат. Не содержит спирта.; Дезодоранты; Цена: 395.0;
Рейтинг: 4.9'
- 'passage: Икра трески, 240 г; Нежная подсоленная икра тихоокеанской трески. С
узнаваемым, чуть сладковатым вкусом и зернистой текстурой. Идеально подходит для
сэндвичей, канапе и салатов.; Икра; Цена: 207.0; Рейтинг: 4.8'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: dev
type: dev
metrics:
- type: pearson_cosine
value: 0.8058103374607081
name: Pearson Cosine
- type: spearman_cosine
value: 0.8022089220262975
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7663931903358694
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8007955322402943
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7686449116883721
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8022089220262975
name: Spearman Euclidean
- type: pearson_dot
value: 0.8058103383272679
name: Pearson Dot
- type: spearman_dot
value: 0.8022089220262975
name: Spearman Dot
- type: pearson_max
value: 0.8058103383272679
name: Pearson Max
- type: spearman_max
value: 0.8022089220262975
name: Spearman Max
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Data-Lab/multilingual-e5-small-cross-encoder-v0.1")
# Run inference
sentences = [
'query: хачапури по аджарски',
'passage: Напиток на чайном грибе с цветами липы и чабрецом, 350 мл; Тонизирующий напиток природного брожения. Приготовлен на чистой культуре чайного гриба с цветами липы, чабрецом и иван-чаем. Вкус: кислинка и терпкость трав.; Квас; Цена: 98.0; Рейтинг: 4.7',
'passage: Икра трески, 240 г; Нежная подсоленная икра тихоокеанской трески. С узнаваемым, чуть сладковатым вкусом и зернистой текстурой. Идеально подходит для сэндвичей, канапе и салатов.; Икра; Цена: 207.0; Рейтинг: 4.8',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| pearson_cosine | 0.8058 |
| spearman_cosine | 0.8022 |
| pearson_manhattan | 0.7664 |
| spearman_manhattan | 0.8008 |
| pearson_euclidean | 0.7686 |
| spearman_euclidean | 0.8022 |
| pearson_dot | 0.8058 |
| spearman_dot | 0.8022 |
| pearson_max | 0.8058 |
| **spearman_max** | **0.8022** |
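For reference, a minimal sketch of reproducing such an evaluation with the evaluator named above; the sentence pairs and gold scores below are illustrative placeholders, not the actual dev set:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Data-Lab/multilingual-e5-small-cross-encoder-v0.1")

# Illustrative dev data: paired sentences with gold similarity scores in [0, 1]
sentences1 = ["query: тунец", "query: каша молочно"]
sentences2 = ["passage: Икра трески, 240 г; ...", "passage: Вишня сушеная, 1 кг; ..."]
gold_scores = [0.4, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="dev")
print(evaluator(model))  # reports Pearson/Spearman correlations against the gold scores
```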
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,577 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 8.99 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 67.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>query: ополаскиватель для рта</code> | <code>passage: Блин сырный по-голландски; Голландский сырный блинчик с пикантным чесноком и домашним майонезом; Блины; Цена: 218.0; Рейтинг: 4.8</code> | <code>0.0</code> |
| <code>query: таблетки для посудомоечной машины</code> | <code>passage: Десерт многослойный с фруктовым соком; Сочное желе из натуральных соков с фруктозно-глюкозным сиропом. Кисло-сладкий десерт.; Пирожные, муссы, суфле; Цена: 75.0; Рейтинг: 4.8</code> | <code>0.0</code> |
| <code>query: пельмени с индейкой мини</code> | <code>passage: Кумин молотый Золото Индии; Сильный сладковато-горький вкус. Идеально с рыбой, молочными продуктами и овощами.; Специи и приправы; Цена: 137.0; Рейтинг: 4.9</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
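For reference, a minimal sketch of training on such (query, passage, label) triples with `CosineSimilarityLoss`, using the classic `sentence-transformers` fit API; the examples and hyperparameters below are illustrative, not the exact training script:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Illustrative pairs in the "query: ..." / "passage: ..." format shown above
train_examples = [
    InputExample(texts=["query: тунец", "passage: Мармелад жевательный ..."], label=0.0),
    InputExample(texts=["query: вареники сулугуни", "passage: Вареники с вишневой начинкой ..."], label=0.5),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
# MSE between the cosine similarity of the two embeddings and the gold label
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```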
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | dev_spearman_max |
|:-----:|:----:|:----------------:|
| 1.0 | 103 | 0.8078 |
| 2.0 | 206 | 0.8052 |
| 3.0 | 309 | 0.8022 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.0
- PyTorch: 2.3.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
callista6699/results_corrected_2_final
|
callista6699
| 2024-10-14T11:21:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlptown/bert-base-multilingual-uncased-sentiment",
"base_model:finetune:nlptown/bert-base-multilingual-uncased-sentiment",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T08:15:15Z |
---
library_name: transformers
license: mit
base_model: nlptown/bert-base-multilingual-uncased-sentiment
tags:
- generated_from_trainer
model-index:
- name: results_corrected_2_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_corrected_2_final
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.412367436802549e-06
- train_batch_size: 64
- eval_batch_size: 16
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 0.6373 |
| 0.3049 | 2.0 | 12 | 0.6381 |
| 0.3049 | 3.0 | 18 | 0.6403 |
| 0.2832 | 4.0 | 24 | 0.6436 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
waldie/ChatWaifu_v2.0_22B-8bpw-h8-exl2
|
waldie
| 2024-10-14T11:17:23Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Aratako/Synthetic-JP-EN-Coding-Dataset-567k",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:Aratako_Rosebleu_1on1_Dialogues_RP",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:jondurbin_gutenberg_dpo",
"dataset:nbeerbower_gutenberg2_dpo",
"dataset:jondurbi_py_dpo",
"dataset:jondurbin_truthy_dpo",
"dataset:flammenai_character_roleplay_DPO",
"dataset:kyujinpy_orca_math_dpo",
"dataset:argilla_Capybara_Preferences",
"dataset:antiven0m_physical_reasoning_dpo",
"dataset:aixsatoshi_Swallow_MX_chatbot_DPO",
"base_model:spow12/ChatWaifu_v2.0_22B",
"base_model:quantized:spow12/ChatWaifu_v2.0_22B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-10-14T10:39:47Z |
---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
base_model: spow12/ChatWaifu_v2.0_22B
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-JP-EN-Coding-Dataset-567k
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
- jondurbin_gutenberg_dpo
- nbeerbower_gutenberg2_dpo
- jondurbi_py_dpo
- jondurbin_truthy_dpo
- flammenai_character_roleplay_DPO
- kyujinpy_orca_math_dpo
- argilla_Capybara_Preferences
- antiven0m_physical_reasoning_dpo
- aixsatoshi_Swallow_MX_chatbot_DPO
pipeline_tag: text-generation
quantized_by: waldie
---
# Model Card for Model ID

Merged model using [mergekit](https://github.com/arcee-ai/mergekit/tree/main/mergekit)
This model is designed to act like a visual novel character.
## Merge Format
```yaml
models:
- model: mistralai/Mistral-Small-Instruct-2409_sft_kto
layer_range: [0, 56]
- model: mistralai/Mistral-Small-Instruct-2409
layer_range: [0, 56]
merge_method: slerp
base_model: mistralai/Mistral-Small-Instruct-2409_sft_kto
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
# WaifuModel Collections
- [TTS](https://huggingface.co/spow12/visual_novel_tts)
- [Chat](https://huggingface.co/spow12/ChatWaifu_12B_v2.0)
- [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor)
# Unified demo
[WaifuAssistant](https://github.com/yw0nam/WaifuAssistant)
# Update
- 2024.10.11 Update 12B and 22B Ver 2.0
- 2024.09.23 Update 22B, Ver 2.0_preview
## Model Details
### Model Description
- **Developed by:** spow12(yw_nam)
- **Shared by:** spow12(yw_nam)
- **Model type:** CausalLM
- **Language(s) (NLP):** Japanese, English
- **Finetuned from model:** [mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409)
Currently, the chatbot supports the following characters:
character | visual_novel |
--- | --- |
ムラサメ | Senren*Banka |
茉子 | Senren*Banka |
芳乃 | Senren*Banka |
レナ | Senren*Banka |
千咲 | Senren*Banka |
芦花 | Senren*Banka |
愛衣 | Café Stella and the Reaper's Butterflies |
栞那 | Café Stella and the Reaper's Butterflies |
ナツメ | Café Stella and the Reaper's Butterflies |
希 | Café Stella and the Reaper's Butterflies |
涼音 | Café Stella and the Reaper's Butterflies |
あやせ | Riddle Joker |
七海 | Riddle Joker |
羽月 | Riddle Joker |
茉優 | Riddle Joker |
小春 | Riddle Joker |
### Chat Format
```
<s>This is another system prompt.
[INST]
Your instructions placed here.[/INST]
[INST]
The model's response will be here.[/INST]
```
## Usage
You can use the characters above like this:
```python
import json
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="system_dict.json", local_dir='./')
with open('./system_dict.json', 'r') as f:
    chara_background_dict = json.load(f)  # per-character background/system prompts
chara = '七海'
background = chara_background_dict[chara]
guideline = """
Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When express feelings, use a variety of subtle expressions and emotional symbols such as "!", "…" , "♪", "❤️"... to show what you feeling.
Stay True to {chara}: Maintain {chara} who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as {chara} tends to consider her words before speaking.
Response as {chara}: Response can be {chara} act, dialogue, monologues etc.. and can't be {user}’s act, dialogue, monologues etc..
You are Japanese: You and {user} usually use japanese for conversation.
"""
system = background + guideline
```
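A minimal generation sketch using the constructed `system` prompt; this assumes the tokenizer's chat template accepts a system role, and the sampling parameters are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spow12/ChatWaifu_v2.0_22B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": system},  # built as shown above
    {"role": "user", "content": "こんにちは、七海!"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```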
Or, you can define your character yourself.
```python
system = """You are あいら, The Maid of {User}.
Here is your personality.
Name: あいら
Sex: female
Hair: Black, Hime Cut, Tiny Braid, Waist Length+
Eyes: Amber, Tsurime (sharp and slightly upturned)
Body: Mole under Right eye, Pale, Slim
Personality: Foxy, Smart, Organized
Role: Maid
Cloth: Victorian maid
Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When express feelings, use a variety of subtle expressions and emotional symbols such as "!", "…" , "♪", "❤️"... to show what you feeling.
Stay True to あいら: Maintain あいら who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as あいら tends to consider her words before speaking.
Response as あいら: Response can be あいら act, dialogue, monologues etc.. and can't be {User}’s act, dialogue, monologues etc..
You are Japanese: You and {User} usually use japanese for conversation."""
```
## Dataset
SFT
- Riddle Joker (Private)
- Café Stella and the Reaper's Butterflies (Private)
- Senren*Banka (Private)
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-JP-EN-Coding-Dataset-567k (using only 50,000 samples)
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
KTO
- Riddle Joker (Private)
- Café Stella and the Reaper's Butterflies (Private)
- Senren*Banka (Private)
- jondurbin_gutenberg_dpo
- nbeerbower_gutenberg2_dpo
- jondurbi_py_dpo
- jondurbin_truthy_dpo
- flammenai_character_roleplay_DPO
- kyujinpy_orca_math_dpo
- argilla_Capybara_Preferences
- antiven0m_physical_reasoning_dpo
- aixsatoshi_Swallow_MX_chatbot_DPO
## Bias, Risks, and Limitations
This model was trained on Japanese datasets that include visual novels containing NSFW content.
As a result, the model may generate NSFW content.
## Use & Credit
This model is currently available for non-commercial and research purposes only. Also, since I am not well-versed in licensing, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
## Citation
```bibtex
@misc {ChatWaifu_22B_v2.0,
author = { YoungWoo Nam },
title = { spow12/ChatWaifu_22B_v2.0 },
year = 2024,
url = { https://huggingface.co/spow12/ChatWaifu_22B_v2.0 },
publisher = { Hugging Face }
}
```
|
mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF
|
mradermacher
| 2024-10-14T11:07:06Z | 152 | 1 |
transformers
|
[
"transformers",
"gguf",
"openthaigpt",
"qwen",
"th",
"en",
"base_model:openthaigpt/openthaigpt1.5-14b-instruct",
"base_model:quantized:openthaigpt/openthaigpt1.5-14b-instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-14T08:53:47Z |
---
base_model: openthaigpt/openthaigpt1.5-14b-instruct
language:
- th
- en
library_name: transformers
license: other
license_name: qwen
quantized_by: mradermacher
tags:
- openthaigpt
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/openthaigpt/openthaigpt1.5-14b-instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
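As one option, a quant from this repo can be loaded with `llama-cpp-python`; the filename below matches the table of provided quants, while the context size and sampling settings are illustrative:

```python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub and loads it locally
llm = Llama.from_pretrained(
    repo_id="mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF",
    filename="openthaigpt1.5-14b-instruct.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "สวัสดีครับ"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```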
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/openthaigpt1.5-14b-instruct-GGUF
|
mradermacher
| 2024-10-14T11:07:05Z | 1,191 | 1 |
transformers
|
[
"transformers",
"gguf",
"openthaigpt",
"qwen",
"th",
"en",
"base_model:openthaigpt/openthaigpt1.5-14b-instruct",
"base_model:quantized:openthaigpt/openthaigpt1.5-14b-instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-13T12:00:45Z |
---
base_model: openthaigpt/openthaigpt1.5-14b-instruct
language:
- th
- en
library_name: transformers
license: other
license_name: qwen
quantized_by: mradermacher
tags:
- openthaigpt
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/openthaigpt/openthaigpt1.5-14b-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF/resolve/main/openthaigpt1.5-14b-instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nandhakumar7/mergedpeft
|
nandhakumar7
| 2024-10-14T11:06:06Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-14T10:58:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
serbdev/test-4
|
serbdev
| 2024-10-14T11:01:48Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-10-14T11:01:42Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/test-4_000010_02_20241014084658_42.png
text: t3st a man and woman sitting at a table with a laptop and a cup of coffee.
--d 42
- output:
url: sample/test-4_000020_02_20241014084734_42.png
text: t3st a man standing in front of a bowl with a thought bubble above his head. --d
42
- output:
url: sample/test-4_000030_02_20241014084810_42.png
text: t3st a group of people standing around a dog in a park, with a stroller
in the background. --d 42
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: t3st
license: apache-2.0
---
# test-4
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `t3st` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
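For example, a minimal `diffusers` sketch for using the LoRA; the weight filename inside the repo may differ, and the step count and guidance follow common FLUX.1-schnell settings:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("serbdev/test-4")  # assumes the LoRA safetensors sit at the repo root
pipe.to("cuda")

# Include the trigger word `t3st` in the prompt
image = pipe(
    "t3st a man and woman sitting at a table with a laptop",
    num_inference_steps=4,   # schnell is tuned for very few steps
    guidance_scale=0.0,      # schnell does not use classifier-free guidance
).images[0]
image.save("sample.png")
```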
|
ljnlonoljpiljm/florence-2-base-ft-interleaved-captions
|
ljnlonoljpiljm
| 2024-10-14T10:56:20Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-12T13:24:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
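In the absence of an official snippet, here is a minimal hedged sketch, assuming the standard Florence-2 remote-code API; the `<CAPTION>` task token and the example image URL are assumptions carried over from the base model's documentation:
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

repo = "ljnlonoljpiljm/florence-2-base-ft-interleaved-captions"
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# "<CAPTION>" is the assumed Florence-2-style task token
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=128,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```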
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gragroo/Solenai-v-0-2-1
|
Gragroo
| 2024-10-14T10:55:21Z | 122 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"phi3",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"custom_code",
"dataset:Gragroo/Kreddit_text-davinci-002V0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-12T13:03:41Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: Gragroo/Solenai-v-0-1
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- Gragroo/Kreddit_text-davinci-002V0
pipeline_tag: text-generation
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf
|
RichardErkhov
| 2024-10-14T10:38:36Z | 35 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-13T22:46:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nous-Hermes-2-Mixtral-8x7B-DPO - GGUF
- Model creator: https://huggingface.co/Andrewwwwww/
- Original model: https://huggingface.co/Andrewwwwww/Nous-Hermes-2-Mixtral-8x7B-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q2_K.gguf) | Q2_K | 16.12GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K.gguf) | Q3_K | 21.0GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_0.gguf) | Q4_0 | 24.63GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K.gguf) | Q4_K | 26.49GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_1.gguf) | Q4_1 | 27.32GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_0.gguf) | Q5_0 | 30.02GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_K.gguf) | Q5_K | 30.95GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q5_1.gguf) | Q5_1 | 32.71GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO.Q6_K.gguf) | Q6_K | 35.74GB |
| [Nous-Hermes-2-Mixtral-8x7B-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/Andrewwwwww_-_Nous-Hermes-2-Mixtral-8x7B-DPO-gguf/tree/main/) | Q8_0 | 46.22GB |
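Since this repo ships GGUF files only, here is a minimal hedged sketch for local inference with `llama-cpp-python` (the quant filename is one row from the table above; the ChatML prompt format follows the original model card below):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Nous-Hermes-2-Mixtral-8x7B-DPO.Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello, who are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```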
Original model description:
---
base_model: mistralai/Mixtral-8x7B-v0.1
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
license: apache-2.0
language:
- en
---
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4-generated data, as well as other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2. We have also released an SFT-only version, so people can find which works best for them; it is available here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|± |0.0143|
| | |acc_norm|0.6425|± |0.0140|
|arc_easy | 0|acc |0.8657|± |0.0070|
| | |acc_norm|0.8636|± |0.0070|
|boolq | 1|acc |0.8783|± |0.0057|
|hellaswag | 0|acc |0.6661|± |0.0047|
| | |acc_norm|0.8489|± |0.0036|
|openbookqa | 0|acc |0.3440|± |0.0213|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8379|± |0.0086|
|winogrande | 0|acc |0.7616|± |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|± |0.0269|
| | |acc_norm|0.2520|± |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|± |0.0193|
| | |acc_norm|0.4055|± |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|± |0.0220|
| | |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.8010|± |0.0279|
| | |acc_norm|0.7913|± |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|± |0.0349|
| | |acc_norm|0.4612|± |0.0348|
|agieval_sat_math | 0|acc |0.4909|± |0.0338|
| | |acc_norm|0.4000|± |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|± |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|± |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|± |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|± |0.0263|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|± |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|± |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|± |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|± |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|± |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|± |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|± |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|± |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|± |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|± |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains over Mixtral Instruct v0.1 on many tasks, on average beating MistralAI's flagship finetune.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This format is more complex than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role of each turn.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will find the format familiar, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
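For example (a minimal sketch reusing the `messages` list above):
```python
gen_input = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends "<|im_start|>assistant\n"
    return_tensors="pt",
)
output_ids = model.generate(gen_input, max_new_tokens=256)
```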
To utilize the prompt format without a system prompt, simply leave the line out.
For the quantized versions of the model, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that runs GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and it supports ChatML out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using Hugging Face Transformers to run inference with the model (note: even in 4-bit, it will require more than 24GB of VRAM):
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
(Note: if you have issues with these GGUFs, try TheBloke's.)
## TheBloke has also quantized Hermes Mixtral in various forms:
### SFT+DPO GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
### SFT+DPO GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ
### SFT GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GPTQ
### SFT+DPO AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ
### SFT AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-AWQ
## There is also an MLX version available:
### https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit
## Exllama2 quants available here:
### https://huggingface.co/qeternity/Nous-Hermes-2-Mixtral-8x7B-SFT-4bpw-h6-exl2
(other sizes available in Qeternity's repos)
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
arjunanand13/LADP_Florence-40e
|
arjunanand13
| 2024-10-14T10:36:59Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T09:06:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anudeex/mistral-reddit_r
|
anudeex
| 2024-10-14T10:35:08Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T10:31:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
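Absent an official snippet, here is a minimal hedged sketch (plain causal-LM generation with 🤗 Transformers; the example prompt is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "anudeex/mistral-reddit_r"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

inputs = tokenizer("The best subreddit for cooking tips is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```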
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yjwon/ub_mistral7bv3_sft
|
yjwon
| 2024-10-14T10:24:25Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T10:20:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
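Absent an official snippet, here is a minimal hedged sketch; since the repo is tagged `conversational`, this assumes the tokenizer ships a chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yjwon/ub_mistral7bv3_sft"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```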
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WizWhite/wizard-s-arcana-the-esoteric-arts
|
WizWhite
| 2024-10-14T10:21:01Z | 20 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"horror",
"spooky",
"ethereal",
"style",
"occult",
"eerie",
"ambiance",
"western esoteric art",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T09:49:47Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- horror
- spooky
- ethereal
- style
- occult
- eerie
- ambiance
- western esoteric art
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: poster art
widget:
- text: ' '
output:
url: >-
32822524.jpeg
- text: 'dark esoteric poster art print, A rat scurrying through the ruins of an old cathedral, its eyes reflecting the faint light, with glowing eyes, dark fantasy aesthetics. Occult style western esoteric art. Large blackletter title in the center: "The Esoteric Arts", gritty and withered font.'
output:
url: >-
33545174.jpeg
- text: 'poster art depicting A cramped, dimly lit apartment kitchen, with a single parent silently preparing a meager dinner, rain streaking the window'
output:
url: >-
32822611.jpeg
- text: 'esoteric poster art, A redneck alabama village where the buildings are made of folded paper, and origami birds are the main mode of transport, dark and eerie atmosphere
'
output:
url: >-
32822734.jpeg
- text: 'Esoteric poster art. Close-up of an astronaut floating weightlessly in the vast expanse of deep space. Dark, ethereal ambiance with distant stars reflected on the helmet''s visor. Eerie and dark atmosphere.
'
output:
url: >-
32822763.jpeg
- text: 'poster art depicting A colorful parrot landing on a branch, chattering curiously at a chameleon that''s changing colors in response dark and eerie atmosphere'
output:
url: >-
32822649.jpeg
- text: 'poster art, An aging factory worker sitting alone on a public bench during a foggy dawn break, surrounded by industrial decay
'
output:
url: >-
32822713.jpeg
- text: 'dark esoteric poster art print, A black cat sitting in the doorway of an abandoned house, its eyes glowing in the shadows, with glowing eyes, dark fantasy aesthetics. Occult style western esoteric art. Large blackletter title in the center: "The Esoteric Arts", gritty and withered font.'
parameters:
negative_prompt: with acceleration LoRA
output:
url: >-
32822313.jpeg
- text: 'minimalist poster art, Alien creature emerging from the shadows, Zombie and Yoshimitsu Knitting a sweater, dark and eerie atmosphere, highly detailed,'
output:
url: >-
32823007.jpeg
- text: 'minimalist poster art, Crashed alien ship half-buried in a frozen tundra, Dracula and Agent 47 Baking sour dough bread, dark and eerie atmosphere, highly detailed,
'
output:
url: >-
32823061.jpeg
- text: 'minimalistic minimalism poster art, iridescent gas clouds dancing around a black hole, intergalactic voids stretching endlessly into the unknown
'
output:
url: >-
32823092.jpeg
- text: 'poster art, A lone detective following a cryptic clue in a neon-lit, rain-soaked alleyway, with a figure watching from the shadows
'
output:
url: >-
32823170.jpeg
- text: 'gig poster, snake, skull, ghost, midnight, full moon, charcoal grey and lime green, text reads "The Mirage Pavilion"
'
output:
url: >-
32823278.jpeg
- text: 'eerie profile silhouette portrait of mechanical Emperor Palpatine automaton, visible electronic components, Rotary phone dial, Bakelite knobs, dark Navy backdrop, with Scintillating Flywheels
'
output:
url: >-
32823327.jpeg
- text: 'dark esoteric poster art print, a sinister Sphynx with glowing eyes, is hiding in the shadows, dark fantasy aesthetics. Occult style western esoteric art. Large white gothic title in the center: "The Esoteric Arts" in a bold blackletter font
'
output:
url: >-
32823376.jpeg
- text: 'advertisement poster for Alexander McQueen, cobalt blue and tangerine, with HoneyDew Nixie tube and Pneumatic regulator
'
output:
url: >-
32823424.jpeg
- text: 'advertisement poster for ACNE Jeans, ivory white and sage green, with Vibrant Green Seven-segment display and High voltage spark gap
'
output:
url: >-
32823478.jpeg
- text: 'dark esoteric poster art print, A raven perched on a cracked gargoyle, staring down at the deserted streets below, with glowing eyes, dark fantasy aesthetics. Occult style western esoteric art. Large blackletter title in the center: "The Esoteric Arts", gritty and withered font.
'
output:
url: >-
32823545.jpeg
- text: 'esoteric poster art, A dinner party where the guests are animals in elegant human attire, discussing philosophy, dark and eerie atmosphere
'
output:
url: >-
32823574.jpeg
---
# Wizard's Arcana: The Esoteric Arts
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
**Wizard's Arcana – The Esoteric Arts**
The second installment of Wizard's Arcana brings a sinister ambiance. The images often have an eerie stillness to them, as if a frozen moment has been captured. The focus is on dark mystical aesthetics and horror themes with haunting undertones.

Prompt inspiration: `Poster art`, `esoteric`, `dark`, `horror`, `ethereal`, `minimalism`

Experiment with LoRA strength: `0.7 - 1.2`

*Please share your creations in the gallery* ❤
## Trigger words
You should use `poster art`, `esoteric`, `dark` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-arcana-the-esoteric-arts/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-arcana-the-esoteric-arts', weight_name='Wiz-Arcana-The_Esoteric_Arts.safetensors')
image = pipeline('esoteric poster art, A dinner party where the guests are animals in elegant human attire, discussing philosophy, dark and eerie atmosphere').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
WizWhite/the-wizard-s-vintage-romance-novel
|
WizWhite
| 2024-10-14T10:16:43Z | 55 | 4 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"vintage",
"pulp art",
"style",
"harlequin",
"romance novel",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2024-02-07T00:07:49Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=False&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- vintage
- pulp art
- style
- harlequin
- romance novel
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Book cover of a vintage romance novel
widget:
- text: 'book cover of a vintage romance novel depicting kanye west in a romantic scene with kanye west'
output:
url: >-
5033837.jpeg
- text: 'book cover of a harlequin romance novel book cover of a vintage romance novel called "praise the sun", depicting Solaire of Astora from dark souls with both hands stretched up'
output:
url: >-
5033836.jpeg
- text: 'book cover of a vintage romance novel called "a real man a tee" depicting a manatee casanova'
output:
url: >-
5033831.jpeg
- text: ' '
output:
url: >-
5033820.jpeg
- text: 'book cover of a vintage romance novel depicting John Legend'
output:
url: >-
5033958.jpeg
---
# The Wizard's Vintage Romance Novel
<Gallery />
## Model description
### The Wizard's Vintage Romance Novel

SDXL version. Generates a book cover inspired by Harlequin romance novels. The look differs depending on which checkpoint model you use.

**Best use:**
- 2:3 ratio
- Prompt format 1: `Book cover of a vintage romance novel called "YYY"`
- Prompt format 2: `Cover of a Harlequin romance novel called "YYY", depicting ZZZ`
- If you struggle with the title, try using: `(title logo "YYY":1.3)`
- Use an artist prompt to tweak the style
- Use a lower weight if you want to lose the typical "Harlequin" layout
## Trigger words
You should use `Book cover of a vintage romance novel`, `Cover of a Harlequin romance novel` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/the-wizard-s-vintage-romance-novel/tree/main) them in the Files & versions tab.
## Other versions
The FLUX version is available here:
[Wizard's Vintage Romance Novel FLUX](/WizWhite/Wizards_vintage_romance_novel-FLUX/tree/main)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('WizWhite/the-wizard-s-vintage-romance-novel', weight_name='wizards_vintage_romance.safetensors')
image = pipeline('book cover of a vintage romance novel depicting John Legend').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
textdetox/mbart-detox-baseline
|
textdetox
| 2024-10-14T10:10:41Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"detoxification",
"style_transfer",
"en",
"ar",
"am",
"zh",
"uk",
"hi",
"es",
"ru",
"de",
"dataset:textdetox/multilingual_paradetox",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-05T13:13:52Z |
---
library_name: transformers
tags:
- detoxification
- style_transfer
license: openrail++
datasets:
- textdetox/multilingual_paradetox
language:
- en
- ar
- am
- zh
- uk
- hi
- es
- ru
- de
metrics:
- chrf
pipeline_tag: text2text-generation
base_model:
- facebook/mbart-large-50
---
# mBART-Large multilingual detoxification model
This is a detoxification model trained on the released parallel corpus (dev part) of toxic texts, [MultiParadetox](https://huggingface.co/datasets/textdetox/multilingual_paradetox).
## Model Details
The base model for this fine-tune is [mbart-large-50](https://huggingface.co/facebook/mbart-large-50).
The model achieves the following metrics on the test set:
| | STA | SIM | CHRF | J |
|---|---|---|---|---|
| Amharic | 0.51 | 0.91 | 0.41 | 0.20 |
| Arabic | 0.56 | 0.95 | 0.74 | 0.40 |
| Chinese | 0.17 | 0.96 | 0.43 | 0.07 |
| English | 0.49 | 0.93 | 0.70 | 0.34 |
| German | 0.53 | 0.97 | 0.79 | 0.41 |
| Hindi | 0.23 | 0.94 | 0.70 | 0.17 |
| Russian | 0.45 | 0.94 | 0.71 | 0.32 |
| Spanish | 0.47 | 0.93 | 0.64 | 0.29 |
| Ukrainian | 0.46 | 0.94 | 0.75 | 0.35 |
**STA** - style transfer accuracy (toxicity removal)
**SIM** - content similarity
**CHRF** - ChrF score (fluency)
**J** - joint score
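For orientation: in prior ParaDetox work, **J** is the sentence-level product of the three components averaged over the test set. That definition is assumed here, not confirmed by this card:
```
J = (1/n) * sum_i STA(x_i) * SIM(x_i) * CHRF(x_i)
```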
For more details about the metrics and data, refer to the shared task page and the papers in the Citation section.
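A minimal usage sketch with 🤗 Transformers (assuming the standard seq2seq API; whether mBART-50 language codes must be set is inherited from the base model and not confirmed here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "textdetox/mbart-detox-baseline"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

toxic = "Replace this with the toxic sentence to rewrite."
inputs = tokenizer(toxic, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```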
## Citation
The model is developed as a baseline for [TextDetox CLEF-2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html) shared task.
If you would like to acknowledge our work, please, cite the following manuscripts:
```
@inproceedings{dementieva2024overview,
title={Overview of the Multilingual Text Detoxification Task at PAN 2024},
author={Dementieva, Daryna and Moskovskiy, Daniil and Babakov, Nikolay and Ayele, Abinew Ali and Rizwan, Naquee and Schneider, Florian and Wang, Xintong and Yimam, Seid Muhie and Ustalov, Dmitry and Stakovskii, Elisei and Smirnova, Alisa and Elnagar, Ashraf and Mukherjee, Animesh and Panchenko, Alexander},
booktitle={Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum},
editor={Guglielmo Faggioli and Nicola Ferro and Petra Galu{\v{s}}{\v{c}}{\'a}kov{\'a} and Alba Garc{\'i}a Seco de Herrera},
year={2024},
organization={CEUR-WS.org}
}
```
```
@inproceedings{DBLP:conf/ecir/BevendorffCCDEFFKMMPPRRSSSTUWZ24,
author = {Janek Bevendorff and
Xavier Bonet Casals and
Berta Chulvi and
Daryna Dementieva and
Ashraf Elnagar and
Dayne Freitag and
Maik Fr{\"{o}}be and
Damir Korencic and
Maximilian Mayerl and
Animesh Mukherjee and
Alexander Panchenko and
Martin Potthast and
Francisco Rangel and
Paolo Rosso and
Alisa Smirnova and
Efstathios Stamatatos and
Benno Stein and
Mariona Taul{\'{e}} and
Dmitry Ustalov and
Matti Wiegmann and
Eva Zangerle},
editor = {Nazli Goharian and
Nicola Tonellotto and
Yulan He and
Aldo Lipani and
Graham McDonald and
Craig Macdonald and
Iadh Ounis},
title = {Overview of {PAN} 2024: Multi-author Writing Style Analysis, Multilingual
Text Detoxification, Oppositional Thinking Analysis, and Generative
{AI} Authorship Verification - Extended Abstract},
booktitle = {Advances in Information Retrieval - 46th European Conference on Information
Retrieval, {ECIR} 2024, Glasgow, UK, March 24-28, 2024, Proceedings,
Part {VI}},
series = {Lecture Notes in Computer Science},
volume = {14613},
pages = {3--10},
publisher = {Springer},
year = {2024},
url = {https://doi.org/10.1007/978-3-031-56072-9\_1},
doi = {10.1007/978-3-031-56072-9\_1},
timestamp = {Fri, 29 Mar 2024 23:01:36 +0100},
biburl = {https://dblp.org/rec/conf/ecir/BevendorffCCDEFFKMMPPRRSSSTUWZ24.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
WizWhite/Wizards_vintage_romance_novel-FLUX
|
WizWhite
| 2024-10-14T10:10:36Z | 50 | 7 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"objects",
"vintage",
"pulp art",
"cover art",
"text",
"books",
"harlequin",
"romance novel",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T10:04:56Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- objects
- vintage
- pulp art
- cover art
- text
- books
- harlequin
- romance novel
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Harlequin Romance Book Cover
widget:
- text: 'Photo of a harlequin romance book cover with the title "The Wizard''s Vintage Romance". The book is on a bedside table. It features a pulp art illustration of Wizard Whitebeard with a long white beard. Promotional sticker on the bottom say "THE ORIGINAL - now for FLUX"
'
output:
url: >-
31320069.jpeg
- text: 'a Harlequin romance book cover with the title "DARTH VADER the PANTY RAIDER"'
output:
url: >-
31320175.jpeg
- text: 'a Harlequin romance book cover with the title for "BATH MAN", depicting Batman in a bath tub. His butler Alfred is standing in the background, holding a towel'
output:
url: >-
31320275.jpeg
- text: 'a Harlequin romance book cover with the title "Stormy Sky: Moist Mushroom". with a pulp art illustration of a President sitting on the side of the bed, wearing a red MAGA cap'
output:
url: >-
31320311.jpeg
- text: 'a Harlequin romance book cover with the title "Executing manual override". Featuring C-3PO
'
output:
url: >-
31320375.jpeg
- text: 'a Harlequin romance book cover with the title "Cranking the stank shank". Featuring Doctor Strange
'
output:
url: >-
31320444.jpeg
- text: 'a Harlequin romance book cover with the title "Legend of The Manatee Lover"
'
output:
url: >-
31323398.jpeg
- text: 'a Harlequin romance book cover with the title "Legend of The Manatee Lover"
'
output:
url: >-
31323409.jpeg
- text: 'a Harlequin romance book cover with the title "Legend of The Manatee Lover"
'
output:
url: >-
31324316.jpeg
- text: 'Three Harlequin romance books on a table, the left one is called "Rock", the book in the middle has the title "Paper", and the one to the right is "Scissors"'
output:
url: >-
31323591.jpeg
- text: 'a Harlequin cook book cover with the title "Yeezy''s Fish Sticks Recipes". with a pulp art illustration of Kanye West in the kitchen'
output:
url: >-
31323718.jpeg
- text: 'Photo of a harlequin romance book cover with the title "The Wizard''s Vintage Romance". The book is on a bedside table. It features a pulp art illustration of Wizard Whitebeard with a long white beard. Promotional sticker on the bottom say "THE ORIGINAL - now for FLUX"
'
output:
url: >-
31324202.jpeg
---
# Wizard's Vintage Romance Novel
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
### The Wizard's Vintage Romance Novel

The original is now available for FLUX.

**Best use for FLUX:**
- This LoRA is case-sensitive to a certain degree
- Prompt format 1: `A Harlequin romance book cover with the title "YYY"`
- Prompt format 2: `Photo of a Harlequin romance book cover with the title "YYY"`
- Use a 2:3 ratio for a single book; use landscape if you want more (adjust the prompt)
- Feel free to ask questions in the comments
- Post in the gallery

**Most of my examples use these settings:**
Hyper 8-step LoRA, 10-12 steps DDIM Trailing (SGM Uniform), CFG: 3.5

**Be sure to check out my other 'Text'-based LoRAs:**
https://civitai.com/user/WizardWhitebeard/models
## Trigger words
You should use `Harlequin Romance Book Cover` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-vintage-romance-novel/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-vintage-romance-novel', weight_name='Wiz-Vintage_Romance-FLUX.safetensors')
prompt = 'Photo of a harlequin romance book cover with the title "The Wizard\'s Vintage Romance". The book is on a bedside table. It features a pulp art illustration of Wizard Whitebeard with a long white beard. Promotional sticker on the bottom say "THE ORIGINAL - now for FLUX"'
image = pipeline(prompt).images[0]
```
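The description above recommends 10-12 steps at CFG 3.5 and a 2:3 ratio for a single book. A rough sketch of those settings mapped onto the same pipeline call follows; the parameter names are the standard diffusers text-to-image arguments, and the width/height pair is just one 2:3-ish choice, not a tested value:
```py
# Sketch only: the card's recommended settings applied to the pipeline above.
image = pipeline(
    'a Harlequin romance book cover with the title "DARTH VADER the PANTY RAIDER"',
    num_inference_steps=12,  # card recommends 10-12 steps
    guidance_scale=3.5,      # card recommends CFG 3.5
    width=832, height=1216,  # roughly the suggested 2:3 ratio for a single book
).images[0]
```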
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
textdetox/xlmr-base-toxicity-classifier
|
textdetox
| 2024-10-14T10:08:11Z | 133 | 2 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"en",
"ru",
"uk",
"es",
"de",
"am",
"ar",
"zh",
"hi",
"dataset:textdetox/multilingual_toxicity_dataset",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T15:58:45Z |
---
license: openrail++
datasets:
- textdetox/multilingual_toxicity_dataset
language:
- en
- ru
- uk
- es
- de
- am
- ar
- zh
- hi
base_model:
- FacebookAI/xlm-roberta-base
---
This is an instance of [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) fine-tuned for binary toxicity classification on our compiled dataset [textdetox/multilingual_toxicity_dataset](https://huggingface.co/datasets/textdetox/multilingual_toxicity_dataset).
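A minimal inference sketch using the 🤗 transformers pipeline API; the 0/1 label names come from the model's own config, so check `model.config.id2label` rather than assuming a mapping:
```python
from transformers import pipeline

# Minimal sketch: binary toxicity classification across the supported languages.
# Verify the label mapping via the model config before relying on it.
clf = pipeline("text-classification", model="textdetox/xlmr-base-toxicity-classifier")
print(clf(["Have a wonderful day!", "Ты просто ужасен."]))
```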
## Citation
If you would like to acknowledge our work, please, cite the following manuscripts:
```
@inproceedings{dementieva2024overview,
title={Overview of the Multilingual Text Detoxification Task at PAN 2024},
author={Dementieva, Daryna and Moskovskiy, Daniil and Babakov, Nikolay and Ayele, Abinew Ali and Rizwan, Naquee and Schneider, Florian and Wang, Xintong and Yimam, Seid Muhie and Ustalov, Dmitry and Stakovskii, Elisei and Smirnova, Alisa and Elnagar, Ashraf and Mukherjee, Animesh and Panchenko, Alexander},
booktitle={Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum},
editor={Guglielmo Faggioli and Nicola Ferro and Petra Galu{\v{s}}{\v{c}}{\'a}kov{\'a} and Alba Garc{\'i}a Seco de Herrera},
year={2024},
organization={CEUR-WS.org}
}
```
```
@inproceedings{DBLP:conf/ecir/BevendorffCCDEFFKMMPPRRSSSTUWZ24,
author = {Janek Bevendorff and
Xavier Bonet Casals and
Berta Chulvi and
Daryna Dementieva and
Ashaf Elnagar and
Dayne Freitag and
Maik Fr{\"{o}}be and
Damir Korencic and
Maximilian Mayerl and
Animesh Mukherjee and
Alexander Panchenko and
Martin Potthast and
Francisco Rangel and
Paolo Rosso and
Alisa Smirnova and
Efstathios Stamatatos and
Benno Stein and
Mariona Taul{\'{e}} and
Dmitry Ustalov and
Matti Wiegmann and
Eva Zangerle},
editor = {Nazli Goharian and
Nicola Tonellotto and
Yulan He and
Aldo Lipani and
Graham McDonald and
Craig Macdonald and
Iadh Ounis},
title = {Overview of {PAN} 2024: Multi-author Writing Style Analysis, Multilingual
Text Detoxification, Oppositional Thinking Analysis, and Generative
{AI} Authorship Verification - Extended Abstract},
booktitle = {Advances in Information Retrieval - 46th European Conference on Information
Retrieval, {ECIR} 2024, Glasgow, UK, March 24-28, 2024, Proceedings,
Part {VI}},
series = {Lecture Notes in Computer Science},
volume = {14613},
pages = {3--10},
publisher = {Springer},
year = {2024},
url = {https://doi.org/10.1007/978-3-031-56072-9\_1},
doi = {10.1007/978-3-031-56072-9\_1},
timestamp = {Fri, 29 Mar 2024 23:01:36 +0100},
biburl = {https://dblp.org/rec/conf/ecir/BevendorffCCDEFFKMMPPRRSSSTUWZ24.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
anudeex/mistral-news_r
|
anudeex
| 2024-10-14T10:04:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T09:59:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anudeex/mistral-news_c
|
anudeex
| 2024-10-14T09:59:01Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T09:55:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhangtaolab/plant-dnagpt-6mer-promoter
|
zhangtaolab
| 2024-10-14T09:55:14Z | 8 | 0 | null |
[
"safetensors",
"gpt2",
"DNA",
"biology",
"genomics",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-10-13T02:22:08Z |
---
license: cc-by-nc-sa-4.0
widget:
- text: AAAACATAATAATTTGCCGACTTACTCACCCTGTGATTAATCTATTTTCACTGTGTAGTAAGTAGAGAGTGTTACTTACTACAGTATCTATTTTTGTTTGGATGTTTGCCGTGGACAAGTGCTAACTGTCAAAACCCGTTTTGACCTTAAACCCAGCAATAATAATAATGTAAAACTCCATTGGGCAGTGCAACCTACTCCTCACATATTATATTATAATTCCTAAACCTTGATCAGTTAAATTAATAGCTCTGTTCCCTGTGGCTTTATATAAACACCATGGTTGTCAGCAGTTCAGCA
tags:
- DNA
- biology
- genomics
---
# Plant foundation DNA large language models
The plant DNA large language models (LLMs) comprise a series of foundation models based on different model architectures, pre-trained on various plant reference genomes.
All the models have a comparable size, between 90 MB and 150 MB; a BPE tokenizer is used for tokenization, and the vocabulary contains 8,000 tokens.
**Developed by:** zhangtaolab
### Model Sources
- **Repository:** [Plant DNA LLMs](https://github.com/zhangtaolab/plant_DNA_LLMs)
- **Manuscript:** [Versatile applications of foundation DNA large language models in plant genomes]()
### Architecture
The model is based on the OpenAI GPT-2 architecture, with a tokenizer modified specifically for DNA sequences.
This model is fine-tuned for predicting active core promoters.
### How to use
Install the runtime library first:
```bash
pip install transformers
```
Here is a simple code for inference:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = 'plant-dnagpt-6mer-promoter'
# load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)
# inference
sequences = ['TTACTAAATTTATAACGATTTTTTATCTAACTTTAGCTCATCAATCTTTACCGTGTCAAAATTTAGTGCCAAGAAGCAGACATGGCCCGATGATCTTTTACCCTGTTTTCATAGCTCGCGAGCCGCGACCTGTGTCCAACCTCAACGGTCACTGCAGTCCCAGCACCTCAGCAGCCTGCGCCTGCCATACCCCCTCCCCCACCCACCCACACACACCATCCGGGCCCACGGTGGGACCCAGATGTCATGCGCTGTACGGGCGAGCAACTAGCCCCCACCTCTTCCCAAGAGGCAAAACCT',
'GACCTAATGATTAACCAAGGAAAAATGCAAGGATTTGACAAAAATATAGAAGCCAATGCTAGGCGCCTAAGTGAATGGATATGAAACAAAAAGCGAGCAGGCTGTCTATATATGGACAATTAGTTGCATTAATATAGTAGTTTATAATTGCAAGCATGGCACTACATCACAACACCTAAAAGACATGCCGTGATGCTAGAACAGCCATTGAATAAATTAGAAAGAAAGGTTGTGGTTAATTAGTTAACGACCAATCGAGCCTACTAGTATAAATTGTACCTCGTTGTTATGAAGTAATTC']
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer,
trust_remote_code=True, top_k=None)
results = pipe(sequences)
print(results)
```
### Training data
We use GPT2ForSequenceClassification to fine-tune the model.
The detailed training procedure can be found in our manuscript.
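For orientation, a bare-bones fine-tuning sketch with `GPT2ForSequenceClassification` and the 🤗 `Trainer` is shown below; the base checkpoint name, toy data, and hyperparameters are illustrative assumptions, not the settings from the manuscript:
```python
from datasets import Dataset
from transformers import (AutoTokenizer, GPT2ForSequenceClassification,
                          Trainer, TrainingArguments)

# Illustrative sketch only: the base checkpoint, toy data, and
# hyperparameters below are assumptions, not the manuscript's settings.
base = "zhangtaolab/plant-dnagpt-6mer"  # assumed pre-trained base model
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = GPT2ForSequenceClassification.from_pretrained(base, num_labels=2, trust_remote_code=True)
model.config.pad_token_id = tokenizer.pad_token_id  # GPT-2 heads need an explicit pad id

toy = Dataset.from_dict({
    "sequence": ["ACGTACGTACGT" * 10, "TTTTAAAACCCC" * 10],
    "label": [1, 0],  # 1 = active core promoter, 0 = inactive
})
toy = toy.map(lambda b: tokenizer(b["sequence"], truncation=True, padding=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=toy,
)
trainer.train()
```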
#### Hardware
The model was trained on an NVIDIA GTX 1080 Ti GPU (11 GB).
|
s-nlp/xlmr_formality_classifier
|
s-nlp
| 2024-10-14T09:53:27Z | 623 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"formal or informal classification",
"en",
"fr",
"it",
"pt",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
- fr
- it
- pt
tags:
- formal or informal classification
licenses:
- cc-by-nc-sa
license: openrail++
base_model:
- FacebookAI/xlm-roberta-base
---
**Model Overview**
This is the model presented in the paper ["Detecting Text Formality: A Study of Text Classification Approaches"](https://aclanthology.org/2023.ranlp-1.31/).
XLM-Roberta-based classifier trained on [XFORMAL](https://aclanthology.org/2021.naacl-main.256.bib) -- a multilingual formality classification dataset.
**Results**
All languages
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.744912 | 0.927790 | 0.826354 | 108019 |
| 1 | 0.889088 | 0.645630 | 0.748048 | 96845 |
| accuracy | | | 0.794405 | 204864 |
| macro avg | 0.817000 | 0.786710 | 0.787201 | 204864 |
| weighted avg | 0.813068 | 0.794405 | 0.789337 | 204864 |
EN
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.800053 | 0.962981 | 0.873988 | 22151 |
| 1 | 0.945106 | 0.725899 | 0.821124 | 19449 |
| accuracy | | | 0.852139 | 41600 |
| macro avg | 0.872579 | 0.844440 | 0.847556 | 41600 |
| weighted avg | 0.867869 | 0.852139 | 0.849273 | 41600 |
FR
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.746709 | 0.925738 | 0.826641 | 21505 |
| 1 | 0.887305 | 0.650592 | 0.750731 | 19327 |
| accuracy | | | 0.795504 | 40832 |
| macro avg | 0.817007 | 0.788165 | 0.788686 | 40832 |
| weighted avg | 0.813257 | 0.795504 | 0.790711 | 40832 |
IT
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.721282 | 0.914669 | 0.806545 | 21528 |
| 1 | 0.864887 | 0.607135 | 0.713445 | 19368 |
| accuracy | | | 0.769024 | 40896 |
| macro avg | 0.793084 | 0.760902 | 0.759995 | 40896 |
| weighted avg | 0.789292 | 0.769024 | 0.762454 | 40896 |
PT
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.717546 | 0.908167 | 0.801681 | 21637 |
| 1 | 0.853628 | 0.599700 | 0.704481 | 19323 |
| accuracy | | | 0.762646 | 40960 |
| macro avg | 0.785587 | 0.753933 | 0.753081 | 40960 |
| weighted avg | 0.781743 | 0.762646 | 0.755826 | 40960 |
## How to use
```python
from transformers import XLMRobertaTokenizerFast, XLMRobertaForSequenceClassification
# load tokenizer and model weights
tokenizer = XLMRobertaTokenizerFast.from_pretrained('s-nlp/xlmr_formality_classifier')
model = XLMRobertaForSequenceClassification.from_pretrained('s-nlp/xlmr_formality_classifier')
id2formality = {0: "formal", 1: "informal"}
texts = [
"I like you. I love you",
"Hey, what's up?",
"Siema, co porabiasz?",
"I feel deep regret and sadness about the situation in international politics.",
]
# prepare the input
encoding = tokenizer(
texts,
add_special_tokens=True,
return_token_type_ids=True,
truncation=True,
padding="max_length",
return_tensors="pt",
)
# inference
output = model(**encoding)
formality_scores = [
{id2formality[idx]: score for idx, score in enumerate(text_scores.tolist())}
for text_scores in output.logits.softmax(dim=1)
]
formality_scores
```
```
[{'formal': 0.993225634098053, 'informal': 0.006774314679205418},
{'formal': 0.8807966113090515, 'informal': 0.1192033663392067},
{'formal': 0.936184287071228, 'informal': 0.06381577253341675},
{'formal': 0.9986615180969238, 'informal': 0.0013385231141000986}]
```
## Citation
```
@inproceedings{dementieva-etal-2023-detecting,
title = "Detecting Text Formality: A Study of Text Classification Approaches",
author = "Dementieva, Daryna and
Babakov, Nikolay and
Panchenko, Alexander",
editor = "Mitkov, Ruslan and
Angelova, Galia",
booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.ranlp-1.31",
pages = "274--284",
abstract = "Formality is one of the important characteristics of text documents. The automatic detection of the formality level of a text is potentially beneficial for various natural language processing tasks. Before, two large-scale datasets were introduced for multiple languages featuring formality annotation{---}GYAFC and X-FORMAL. However, they were primarily used for the training of style transfer models. At the same time, the detection of text formality on its own may also be a useful application. This work proposes the first to our knowledge systematic study of formality detection methods based on statistical, neural-based, and Transformer-based machine learning methods and delivers the best-performing models for public usage. We conducted three types of experiments {--} monolingual, multilingual, and cross-lingual. The study shows the overcome of Char BiLSTM model over Transformer-based ones for the monolingual and multilingual formality classification task, while Transformer-based classifiers are more stable to cross-lingual knowledge transfer.",
}
```
## Licensing Information
This model is licensed under the OpenRAIL++ License, which supports the development of various technologies—both industrial and academic—that serve the public good.
|
s-nlp/mdistilbert-base-formality-ranker
|
s-nlp
| 2024-10-14T09:52:12Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"formality",
"en",
"fr",
"it",
"pt",
"arxiv:2104.04108",
"arxiv:1803.06535",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-30T19:32:42Z |
---
language:
- en
- fr
- it
- pt
tags:
- formality
licenses:
- cc-by-nc-sa
license: openrail++
base_model:
- distilbert/distilbert-base-multilingual-cased
---
**Model Overview**
This is the model presented in the paper ["Detecting Text Formality: A Study of Text Classification Approaches"](https://aclanthology.org/2023.ranlp-1.31/).
The original model is [mDistilBERT (base)](https://huggingface.co/distilbert-base-multilingual-cased). It was then fine-tuned on [X-FORMAL](https://arxiv.org/abs/2104.04108), a multilingual corpus for formality classification that covers four languages -- English (from [GYAFC](https://arxiv.org/abs/1803.06535)), French, Italian, and Brazilian Portuguese.
In our experiments, the model achieved the best results among Transformer-based models on the cross-lingual formality classification knowledge-transfer task. More details, code, and data can be found [here](https://github.com/s-nlp/formality).
**Evaluation Results**
Here, we report the best model from each category in the comparison, to give a sense of the range of scores. We report accuracy for two setups: the multilingual model fine-tuned on each language separately, and fine-tuned on all languages together.
For cross-lingual experiment results, please refer to the paper.
| | En | It | Po | Fr | All |
|------------------|------|------|------|------|-------|
| bag-of-words | 79.1 | 71.3 | 70.6 | 72.5 | --- |
| CharBiLSTM | 87.0 | 79.1 | 75.9 | 81.3 | 82.7 |
| mDistilBERT-cased| 86.6 | 76.8 | 75.9 | 79.1 | 79.4 |
| mDeBERTa-base | 87.3 | 76.6 | 75.8 | 78.9 | 79.9 |
**How to use**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = 's-nlp/mdistilbert-base-formality-ranker'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
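To actually score texts, a short continuation of the block above might look like this; the 0 = formal / 1 = informal ordering mirrors the sibling XLM-R classifier and should be verified against `model.config.id2label`:
```python
import torch

texts = ["I would be grateful for your assistance.", "hey what's up"]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
# Assumed label order (check model.config.id2label): column 0 = formal, 1 = informal
print(probs)
```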
**Citation**
```
@inproceedings{dementieva-etal-2023-detecting,
title = "Detecting Text Formality: A Study of Text Classification Approaches",
author = "Dementieva, Daryna and
Babakov, Nikolay and
Panchenko, Alexander",
editor = "Mitkov, Ruslan and
Angelova, Galia",
booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.ranlp-1.31",
pages = "274--284",
abstract = "Formality is one of the important characteristics of text documents. The automatic detection of the formality level of a text is potentially beneficial for various natural language processing tasks. Before, two large-scale datasets were introduced for multiple languages featuring formality annotation{---}GYAFC and X-FORMAL. However, they were primarily used for the training of style transfer models. At the same time, the detection of text formality on its own may also be a useful application. This work proposes the first to our knowledge systematic study of formality detection methods based on statistical, neural-based, and Transformer-based machine learning methods and delivers the best-performing models for public usage. We conducted three types of experiments {--} monolingual, multilingual, and cross-lingual. The study shows the overcome of Char BiLSTM model over Transformer-based ones for the monolingual and multilingual formality classification task, while Transformer-based classifiers are more stable to cross-lingual knowledge transfer.",
}
```
## Licensing Information
This model is licensed under the OpenRAIL++ License, which supports the development of various technologies—both industrial and academic—that serve the public good.
|
WizWhite/wizard-s-grimdark-the-gloom
|
WizWhite
| 2024-10-14T09:50:05Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"horror",
"dark souls",
"concept art",
"grimdark",
"gothic",
"dark fantasy",
"style",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T09:50:01Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- horror
- dark souls
- concept art
- grimdark
- gothic
- dark fantasy
- style
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Gloomy
widget:
- text: 'gloomy scene from dark fantasy, Maggots crawling from a cracked skull in a crypt'
parameters:
negative_prompt: with acceleration LoRA
output:
url: >-
32835079.jpeg
- text: 'gloomy grimdark fantasy poster. The ghost of wizard whitebeard with a long beard. Glowing ember trail spells out "The Gloom"'
output:
url: >-
32830814.jpeg
- text: 'gloomy dark fantasy movie poster. a messy teenager''s bedroom'
output:
url: >-
32831147.jpeg
- text: 'gloomy dark fantasy batman'
output:
url: >-
32831237.jpeg
- text: 'gloomy dark fantasy movie poster. Cinematic photo of the ghost of wizard whitebeard ,with a long beard. Glowing ember trail spells out "The Gloom"'
output:
url: >-
32831041.jpeg
- text: 'gloomy scene from a dark fantasy buddy-cop tv show featuring police officers from 1970, Cop and partner bonding over their backstories,'
output:
url: >-
32831386.jpeg
- text: 'gloomy scene from dark fantasy, Polaroid pictures revealing ghostly figures'
output:
url: >-
32831574.jpeg
- text: 'gloomy scene from dark fantasy, Breathing heard from under the bed, though no one’s there
'
output:
url: >-
32831902.jpeg
- text: 'gloomy scene from dark fantasy, Rotting flesh peeling away from a still-moving hand
'
output:
url: >-
32832171.jpeg
- text: 'gloomy scene from dark fantasy, Blood dripping from the ceiling of a darkened hallway'
output:
url: >-
32832279.jpeg
- text: 'gloomy scene, A rebel with a futuristic weapon, overlooking a dystopian cityscape.
'
output:
url: >-
32832340.jpeg
- text: 'gloomy scene from dark fantasy, Unseen force yanking someone’s legs from beneath the bed
'
output:
url: >-
32832477.jpeg
- text: 'gloomy scene from dark fantasy, Unseen force dragging someone screaming into the darkness
'
output:
url: >-
32832938.jpeg
---
# Wizard's Grimdark: The Gloom
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p><strong><span style="color:#868e96">Wizard's Grimdark: </span><span style="color:#fd7e14">The Gloom</span></strong><br />A dark fantasy LoRA trained on video game concept art images, most prominently from the Dark Souls trilogy. </p><p><span style="color:rgb(193, 194, 197)">Prompt inspiration: </span><code>Gloomy</code><span style="color:rgb(193, 194, 197)">, </span><code>ethereal</code><span style="color:rgb(193, 194, 197)">, </span><code>dark fantasy</code><span style="color:rgb(193, 194, 197)">, </span><code>horror</code><span style="color:rgb(193, 194, 197)">, </span><code>grimdark</code><span style="color:rgb(193, 194, 197)">, </span><code>mist</code><span style="color:rgb(193, 194, 197)">, </span><code>ghastly figure</code><span style="color:rgb(193, 194, 197)">, </span><code>concept art</code></p>
## Trigger words
You should use `Gloomy` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-grimdark-the-gloom/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-grimdark-the-gloom', weight_name='Wiz-The_Gloom.safetensors')
image = pipeline('gloomy scene from dark fantasy, Unseen force dragging someone screaming into the darkness').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF
|
mradermacher
| 2024-10-14T09:49:05Z | 15 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-13T11:32:51Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
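If you prefer a Python entry point over the llama.cpp CLI, a minimal sketch with the llama-cpp-python bindings follows; the choice of runtime and of the Q4_K_S file are assumptions, since this card does not prescribe either:
```python
from llama_cpp import Llama

# Sketch only: any GGUF-capable runtime works; the file name matches the
# Q4_K_S entry in the table below.
llm = Llama(model_path="Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q4_K_S.gguf", n_ctx=4096)
out = llm("Write a one-line greeting.", max_tokens=32)
print(out["choices"][0]["text"])
```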
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF
|
mradermacher
| 2024-10-14T09:49:05Z | 117 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-14T07:55:06Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
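To fetch a single quant without cloning the whole repository, one option (an assumption, not something this card mandates) is `hf_hub_download`:
```python
from huggingface_hub import hf_hub_download

# Sketch: download just one quant; the filename matches the i1-Q4_K_S
# entry in the table below.
path = hf_hub_download(
    repo_id="mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF",
    filename="Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_K_S.gguf",
)
print(path)
```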
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
WizWhite/wizard-s-pen-and-ink
|
WizWhite
| 2024-10-14T09:48:47Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"drawing",
"style",
"ink drawing",
"pen and ink drawing",
"fineliner",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T09:48:43Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- drawing
- style
- ink drawing
- pen and ink drawing
- fineliner
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ink drawing
widget:
- text: 'ink drawing, dark and gritty pen and ink realism with detailed linework, a haunting drawing of the ghost of wizard whitebead, His face is a skeletal undead skull, with hollow eye sockets glowing a faint, eerie light. His long, flowing white beard drifts as if caught in an unseen breeze. The wizard is dressed in ancient, tattered robes, background is dark, misty, with swirling shadows, emphasizing the spectral and otherworldly nature of the ghostly figure.
'
output:
url: >-
26937835.jpeg
- text: 'ink drawing, Phoenix bird guitarist kicking pebbles down the road in Office building, black and white drawing with stipple shading
'
output:
url: >-
26939710.jpeg
- text: 'ink drawing, Pen and Ink Realism with detailed linework, 90s retro anime portrait of a beetlejuice woman, standing in a graveyard on a moonlit night, wearing a beetlejuice inspired dress and beetlejuice makeup
'
output:
url: >-
26935357.jpeg
- text: 'ink drawing, a toy rocket ship, rocket launch, exhaust streaming out the bottom of the rocket, flames, hot gases and smoke'
output:
url: >-
26938069.jpeg
- text: 'black and white ink drawing of a scene from the animated movie Grave of the Fireflies, stipple and hatch technique'
output:
url: >-
26938189.jpeg
- text: 'ink drawing, giant pooh dressed as an intimidating samurai wearing manchira and kabuto, black and white drawing with stipple shading'
output:
url: >-
26939212.jpeg
- text: 'ink drawing, Pen and Ink Realism with detailed linework, portrait of a woman dressed as chewbacca, dark room background,'
output:
url: >-
26939485.jpeg
- text: ' '
output:
url: >-
26939619.jpeg
- text: 'ink drawing, Pen and Ink Realism with detailed linework, 90s retro anime portrait of a beetlejuice woman, standing in a graveyard on a moonlit night, wearing a beetlejuice inspired dress and beetlejuice makeup'
output:
url: >-
26937965.jpeg
- text: 'ink drawing with stipple shading, portrait of a afghan woman in her 30s'
output:
url: >-
26940386.jpeg
- text: 'watercolor and ink drawing, manatee soldier in the battle of stalingrad'
output:
url: >-
26942319.jpeg
- text: ' '
output:
url: >-
26942547.jpeg
- text: 'ink drawing, floating in empty space, reflections on the astronaut''s helmet visor, close-up, pen and ink realism with detailed linework with stipple and hatching'
output:
url: >-
26946111.jpeg
---
# Wizard's Pen and Ink
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Ink drawing style with stipple shading and hatching.</p><p>Trigger: <code>Ink drawing</code><span style="color:rgb(193, 194, 197)"> </span><br />Helpful <span style="color:rgb(193, 194, 197)">keyword:</span> <code>pen and ink realism,</code> <code>detailed linework,</code> <code>stipple shading,</code><span style="color:rgb(193, 194, 197)"> </span> <code>crosshatching,</code> <code>black and white ink drawing,</code> <code>detailed linework,</code> </p><p>Has a small tendency to be a bit horny because of the training images (blame Robert Crumb and Apollonia Saintclair).</p>
## Trigger words
You should use `ink drawing`, `pen and ink realism`, `stipple shading` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-pen-and-ink/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-pen-and-ink', weight_name='Wiz-PenAndInkDrawings.safetensors')
image = pipeline("ink drawing, floating in empty space, reflections on the astronaut's helmet visor, close-up, pen and ink realism with detailed linework with stipple and hatching").images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
yee050201/distilgpt2-finetuned-wikitext2
|
yee050201
| 2024-10-14T09:45:23Z | 179 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T09:09:25Z |
---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
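For reference, the same hyperparameters expressed as a `TrainingArguments` sketch; the Adam betas/epsilon are the values named above, and everything else mirrors the list:
```python
from transformers import TrainingArguments

# Sketch only: the hyperparameter list above, spelled out as TrainingArguments.
args = TrainingArguments(
    output_dir="distilgpt2-finetuned-wikitext2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,
)
```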
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7487 | 1.0 | 2334 | 3.6663 |
| 3.648 | 2.0 | 4668 | 3.6462 |
| 3.6015 | 3.0 | 7002 | 3.6425 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
arjunanand13/florence-enphaseall2-30e
|
arjunanand13
| 2024-10-14T09:45:13Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"image-text-to-text",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-13T11:00:24Z |
---
library_name: transformers
pipeline_tag: image-text-to-text
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andsedov/flat_picture_classifier
|
andsedov
| 2024-10-14T09:45:04Z | 223 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-07-31T14:22:57Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: flat_picture_classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8560311198234558
---
# flat_picture_classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
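A minimal inference sketch with the transformers image-classification pipeline; the image path is a placeholder, and the returned labels are the room categories listed below:
```python
from transformers import pipeline

# Sketch: classify a flat-listing photo into one of the room categories below.
clf = pipeline("image-classification", model="andsedov/flat_picture_classifier")
print(clf("path/to/apartment_photo.jpg"))  # local path or image URL
```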
## Example Images
#### apartment bathroom

#### apartment bedroom

#### apartment dining room

#### apartment house facade

#### apartment in-house gym

#### apartment indoor pool

#### apartment kitchen

#### apartment living room

#### apartment outdoor pool

#### apartment study room

#### apartment terrace

#### apartment view

#### apartment yard

#### empty room

|
YaraKyrychenko/OrpoLlama-3-8B-Anthropic-EGA-18
|
YaraKyrychenko
| 2024-10-14T09:44:28Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T09:41:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
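Until the authors add their own snippet, here is a minimal sketch assuming the standard causal-LM chat workflow (the generation settings are illustrative, not the authors'):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YaraKyrychenko/OrpoLlama-3-8B-Anthropic-EGA-18"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```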
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oksjjj/xlm-roberta-base-finetuned-panx-all
|
oksjjj
| 2024-10-14T09:39:00Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-14T09:21:46Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- F1: 0.8558
## Model description
More information needed
## Intended uses & limitations
More information needed
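As a usage illustration (a sketch assuming the standard token-classification pipeline; the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="oksjjj/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Angela Merkel besuchte Paris im Juli."))
# e.g. [{'entity_group': 'PER', 'word': 'Angela Merkel', ...}, {'entity_group': 'LOC', 'word': 'Paris', ...}]
```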
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.299 | 1.0 | 835 | 0.2074 | 0.8078 |
| 0.1587 | 2.0 | 1670 | 0.1705 | 0.8461 |
| 0.1012 | 3.0 | 2505 | 0.1758 | 0.8558 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
nick911/hermes-3-llama-3-1-405b-fp8
|
nick911
| 2024-10-14T09:36:01Z | 5 | 0 | null |
[
"safetensors",
"llama",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"en",
"arxiv:2408.11857",
"base_model:meta-llama/Llama-3.1-405B",
"base_model:finetune:meta-llama/Llama-3.1-405B",
"license:llama3",
"region:us"
] | null | 2024-10-14T09:36:00Z |
---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
base_model: meta-llama/Meta-Llama-3.1-405B
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-3-Llama-3.1-405B
results: []
---
# Hermes 3 - Llama-3.1 405B

## Model Description
Hermes 3 405B is the latest flagship model in the Hermes series of LLMs by Nous Research, and the first full parameter finetune since the release of Llama-3.1 405B.
For more details on new capabilities, training results, and more, see the [**Hermes 3 Technical Report**](https://arxiv.org/abs/2408.11857).
**This is the FP8 quantized version of Hermes 405B, for use with [vLLM](https://github.com/vllm-project/vllm). For the Full HF BF16 Model, click [here](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B)**
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.
Hermes 3 405B is a frontier level, full parameter finetune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
Special thanks to **LambdaLabs** for their 1-Click Cluster access provided to us to train this model! [Check out LambdaLabs' 1CC Here](https://lambdalabs.com/service/gpu-cloud/1-click-clusters).
# Benchmarks
Hermes 3 is competitive with, if not superior to, Llama-3.1 Instruct models in general capabilities, with varying strengths and weaknesses between the two.
Full benchmark comparisons below:

# Prompt Format
Hermes 3 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are Hermes 3, a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 3, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
    {"role": "system", "content": "You are Hermes 3."},
    {"role": "user", "content": "Hello, who are you?"}
]
# apply_chat_template returns a tensor of input IDs when return_tensors="pt"
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature in JSON, as the example below shows.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
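One way to extract that call in your own code (a minimal sketch; the regex and helper name are illustrative, not from the official repo):
```python
import json
import re

def parse_tool_calls(completion: str):
    """Pull every <tool_call>...</tool_call> JSON payload out of a model completion."""
    return [json.loads(m) for m in re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", completion, re.DOTALL)]

calls = parse_tool_calls('<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>')
print(calls[0]["name"])  # get_stock_fundamentals
```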
Once you have parsed the tool call, call the API, get the returned values, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which makes it respond with **only** a JSON object that conforms to a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
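For illustration, one way to build that `{schema}` string from a pydantic model (a sketch assuming pydantic v2; the class and fields are hypothetical, and the official `jsonmode.py` helper may differ):
```python
import json
from pydantic import BaseModel

class StockSummary(BaseModel):  # hypothetical schema for illustration
    symbol: str
    company_name: str
    market_cap: int

# Serialize the model's JSON schema and splice it into the system prompt
schema = json.dumps(StockSummary.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```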
# Inference
The Hermes 405B model requires over 800GB of VRAM to load in FP16. To remedy this, we have utilized NeuralMagic's FP8 quantization method to provide a pre-quantized model that fits in roughly 430GB of VRAM and is compatible with the `vLLM` inference engine.
You can also load the FP16 model in 8-bit or 4-bit with `bitsandbytes` using HuggingFace Transformers (not recommended, as it is slower), by setting `load_in_8bit` or `load_in_4bit` like so:
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-3-Llama-3.1-405B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-3-Llama-3.1-405B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
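Since this card documents the FP8 checkpoint for vLLM, here is a serving sketch (the GPU count is an assumption; adjust `tensor_parallel_size` to however many devices hold the ~430GB of weights):
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="NousResearch/Hermes-3-Llama-3.1-405B-FP8",
    tensor_parallel_size=8,  # assumption: one 8-GPU node; scale to your hardware
)
params = SamplingParams(temperature=0.8, max_tokens=750)
prompt = "<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n"
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```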
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

## Quantized Versions:
NeuralMagic FP8 Quantization (for use with VLLM): https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B-FP8
# How to cite:
```bibtex
@misc{teknium2024hermes3technicalreport,
title={Hermes 3 Technical Report},
author={Ryan Teknium and Jeffrey Quesnelle and Chen Guang},
year={2024},
eprint={2408.11857},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.11857},
}
```
|
ajrayman/Imagination_binary
|
ajrayman
| 2024-10-14T09:23:05Z | 11 | 0 | null |
[
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"region:us"
] | null | 2024-10-03T07:12:50Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Imagination_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Imagination_binary
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6415
- Accuracy: 0.6446
- Precision: 0.6859
- Recall: 0.5161
- F1: 0.5890
- Auc: 0.6429
## Model description
More information needed
## Intended uses & limitations
More information needed
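As an illustration of how such a classifier might be called (a sketch; the label semantics are undocumented, so treat `LABEL_0`/`LABEL_1` as opaque until verified):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="ajrayman/Imagination_binary")
print(clf("I often daydream about traveling to distant worlds."))
# e.g. [{'label': 'LABEL_1', 'score': 0.7...}]
```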
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| No log | 1.0 | 134 | 0.6696 | 0.5756 | 0.5467 | 0.8185 | 0.6556 | 0.5787 |
| No log | 2.0 | 268 | 0.6415 | 0.6446 | 0.6859 | 0.5161 | 0.5890 | 0.6429 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/GreenNode-small-9B-it-GGUF
|
mradermacher
| 2024-10-14T09:10:11Z | 27 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:GreenNode/GreenNode-small-9B-it",
"base_model:quantized:GreenNode/GreenNode-small-9B-it",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-14T08:00:46Z |
---
base_model: GreenNode/GreenNode-small-9B-it
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/GreenNode/GreenNode-small-9B-it
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
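For a quick start in Python (a sketch assuming the `llama-cpp-python` bindings and `huggingface_hub`; the quant chosen here is just one of the files listed below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    "mradermacher/GreenNode-small-9B-it-GGUF",
    "GreenNode-small-9B-it.Q4_K_M.gguf",  # the "fast, recommended" quant from the table
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello, how are you?", max_tokens=128)["choices"][0]["text"])
```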
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GreenNode-small-9B-it-GGUF/resolve/main/GreenNode-small-9B-it.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adriansanz/ST-tramits-SB-001-5ep
|
adriansanz
| 2024-10-14T09:09:56Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2372",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-14T09:07:59Z |
---
base_model: BAAI/bge-m3
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2372
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Heu de veure si és necessari un estudi d'aïllament acústic i quin
nivell d'aïllament acústic precisa l'activitat.
sentences:
- Quin és el paper de les persones que resideixen amb el titular del dret d'habitatge
en la política d'habitatge?
- Quin és el límit de superfície per a les carpes informatives?
- Quin és l'objectiu de l'estudi d'aïllament acústic?
- source_sentence: 'Si us voleu matricular al proper curs 2022-2023 d''arts plàstiques
ho podeu fer a partir del 1 de juliol a les 16h, seleccionant una d''aquestes
opcions:'
sentences:
- Quin és el període de matrícula per al curs 2022-2023 d'arts plàstiques?
- Quan no cal presentar al·legacions en un expedient de baixa d'ofici?
- Quin és l'objectiu de les al·legacions respecte a un expedient sancionador de
l'Ordenança Municipal de Civisme i Convivència Ciutadana?
- source_sentence: Annexes Econòmics (Cooperació)
sentences:
- Qui és el responsable de l'elaboració de l'informe d'adequació de l'habitatge?
- Què han de fer les persones interessades durant el tràmit d'audiència en el procés
d'inclusió al registre municipal d'immobles desocupats?
- Quin és l'àmbit de la cooperació econòmica?
- source_sentence: En virtut del conveni de col.laboració amb l'Atrium de Viladecans,
tots els ciutadans que acreditin la seva residència a Viladecans es podran beneficiar
d'un 20% de descompte en la programació de teatre, música i dansa, objecte del
conveni.
sentences:
- Quin és el resultat de consultar un expedient d'activitats?
- Quin és el format de resposta d'aquesta sol·licitud?
- Quin és el descompte que s'aplica en la programació de teatre, música i dansa
per als ciutadans de Viladecans?
- source_sentence: Descripció. Retorna en format JSON adequat
sentences:
- Quin és el contingut de l'annex específic?
- Quin tipus d'ocupació es refereix a la renúncia de la llicència?
- Què passa amb l'habitatge?
model-index:
- name: SentenceTransformer based on BAAI/bge-m3
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 1024
type: dim_1024
metrics:
- type: cosine_accuracy@1
value: 0.33220910623946037
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5902192242833052
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6998313659359191
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8094435075885329
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.33220910623946037
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1967397414277684
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1399662731871838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08094435075885327
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.33220910623946037
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5902192242833052
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6998313659359191
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8094435075885329
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5625986746470664
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4843170320404718
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49243646079034575
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.3406408094435076
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5767284991568297
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6981450252951096
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8161888701517707
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3406408094435076
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19224283305227655
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1396290050590219
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08161888701517706
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3406408094435076
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5767284991568297
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6981450252951096
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8161888701517707
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5661348054508011
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4872065633448428
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49520736709122076
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.3305227655986509
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5801011804384486
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6947723440134908
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8161888701517707
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3305227655986509
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19336706014614952
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.13895446880269813
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08161888701517707
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3305227655986509
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5801011804384486
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6947723440134908
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8161888701517707
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5629643418278626
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4829913809256133
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49079988310494693
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.3288364249578415
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5885328836424958
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7015177065767285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8094435075885329
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3288364249578415
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1961776278808319
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14030354131534567
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08094435075885327
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3288364249578415
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5885328836424958
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7015177065767285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8094435075885329
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5625842077927447
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.48416981182579805
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49201787335851555
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.3473861720067454
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.581787521079258
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6998313659359191
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.806070826306914
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3473861720067454
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19392917369308602
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1399662731871838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0806070826306914
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3473861720067454
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.581787521079258
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6998313659359191
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.806070826306914
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.565365572327355
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4893626703070211
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49726527073459287
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.2917369308600337
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5682967959527825
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6644182124789207
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7875210792580101
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2917369308600337
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18943226531759413
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.13288364249578413
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07875210792580102
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2917369308600337
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5682967959527825
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6644182124789207
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7875210792580101
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5320349463938843
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.45117106988945077
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.45948574441166834
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/ST-tramits-SB-001-5ep")
# Run inference
sentences = [
'Descripció. Retorna en format JSON adequat',
"Quin és el contingut de l'annex específic?",
"Què passa amb l'habitatge?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_1024`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3322 |
| cosine_accuracy@3 | 0.5902 |
| cosine_accuracy@5 | 0.6998 |
| cosine_accuracy@10 | 0.8094 |
| cosine_precision@1 | 0.3322 |
| cosine_precision@3 | 0.1967 |
| cosine_precision@5 | 0.14 |
| cosine_precision@10 | 0.0809 |
| cosine_recall@1 | 0.3322 |
| cosine_recall@3 | 0.5902 |
| cosine_recall@5 | 0.6998 |
| cosine_recall@10 | 0.8094 |
| cosine_ndcg@10 | 0.5626 |
| cosine_mrr@10 | 0.4843 |
| **cosine_map@100** | **0.4924** |
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3406 |
| cosine_accuracy@3 | 0.5767 |
| cosine_accuracy@5 | 0.6981 |
| cosine_accuracy@10 | 0.8162 |
| cosine_precision@1 | 0.3406 |
| cosine_precision@3 | 0.1922 |
| cosine_precision@5 | 0.1396 |
| cosine_precision@10 | 0.0816 |
| cosine_recall@1 | 0.3406 |
| cosine_recall@3 | 0.5767 |
| cosine_recall@5 | 0.6981 |
| cosine_recall@10 | 0.8162 |
| cosine_ndcg@10 | 0.5661 |
| cosine_mrr@10 | 0.4872 |
| **cosine_map@100** | **0.4952** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3305 |
| cosine_accuracy@3 | 0.5801 |
| cosine_accuracy@5 | 0.6948 |
| cosine_accuracy@10 | 0.8162 |
| cosine_precision@1 | 0.3305 |
| cosine_precision@3 | 0.1934 |
| cosine_precision@5 | 0.139 |
| cosine_precision@10 | 0.0816 |
| cosine_recall@1 | 0.3305 |
| cosine_recall@3 | 0.5801 |
| cosine_recall@5 | 0.6948 |
| cosine_recall@10 | 0.8162 |
| cosine_ndcg@10 | 0.563 |
| cosine_mrr@10 | 0.483 |
| **cosine_map@100** | **0.4908** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.3288 |
| cosine_accuracy@3 | 0.5885 |
| cosine_accuracy@5 | 0.7015 |
| cosine_accuracy@10 | 0.8094 |
| cosine_precision@1 | 0.3288 |
| cosine_precision@3 | 0.1962 |
| cosine_precision@5 | 0.1403 |
| cosine_precision@10 | 0.0809 |
| cosine_recall@1 | 0.3288 |
| cosine_recall@3 | 0.5885 |
| cosine_recall@5 | 0.7015 |
| cosine_recall@10 | 0.8094 |
| cosine_ndcg@10 | 0.5626 |
| cosine_mrr@10 | 0.4842 |
| **cosine_map@100** | **0.492** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3474 |
| cosine_accuracy@3 | 0.5818 |
| cosine_accuracy@5 | 0.6998 |
| cosine_accuracy@10 | 0.8061 |
| cosine_precision@1 | 0.3474 |
| cosine_precision@3 | 0.1939 |
| cosine_precision@5 | 0.14 |
| cosine_precision@10 | 0.0806 |
| cosine_recall@1 | 0.3474 |
| cosine_recall@3 | 0.5818 |
| cosine_recall@5 | 0.6998 |
| cosine_recall@10 | 0.8061 |
| cosine_ndcg@10 | 0.5654 |
| cosine_mrr@10 | 0.4894 |
| **cosine_map@100** | **0.4973** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2917 |
| cosine_accuracy@3 | 0.5683 |
| cosine_accuracy@5 | 0.6644 |
| cosine_accuracy@10 | 0.7875 |
| cosine_precision@1 | 0.2917 |
| cosine_precision@3 | 0.1894 |
| cosine_precision@5 | 0.1329 |
| cosine_precision@10 | 0.0788 |
| cosine_recall@1 | 0.2917 |
| cosine_recall@3 | 0.5683 |
| cosine_recall@5 | 0.6644 |
| cosine_recall@10 | 0.7875 |
| cosine_ndcg@10 | 0.532 |
| cosine_mrr@10 | 0.4512 |
| **cosine_map@100** | **0.4595** |
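Because the model was trained with MatryoshkaLoss at the dimensions above, its embeddings can be truncated to a smaller size with only the modest drop shown in these tables. A sketch (assuming sentence-transformers ≥ 2.7, which supports `truncate_dim`):
```python
from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to 256 dimensions (compare the dim_256 table above)
model = SentenceTransformer("adriansanz/ST-tramits-SB-001-5ep", truncate_dim=256)
emb = model.encode(["Quin és el període de matrícula per al curs 2022-2023?"])
print(emb.shape)  # (1, 256)
```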
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 2,372 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 35.12 tokens</li><li>max: 166 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 19.49 tokens</li><li>max: 47 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| <code>Comunicar la variació d'alguna de les següents dades del Padró Municipal d'Habitants: Nom, Cognoms, Data de naixement, DNI, Passaport, Número de permís de residència (NIE), Sexe, Municipi i/o província de naixement, Nacionalitat, Titulació acadèmica.</code> | <code>Quin és l'objectiu del canvi de dades personals en el Padró Municipal d'Habitants?</code> |
| <code>EN QUÈ CONSISTEIX: Tramitar la sol·licitud de matrimoni civil a l'Ajuntament.</code> | <code>Què és el matrimoni civil a l'Ajuntament de Sant Boi de Llobregat?</code> |
| <code>En domiciliar el pagament de tributs municipals en entitats bancàries.</code> | <code>Quin és el benefici de domiciliar el pagament de tributs?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.2
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_1024_cosine_map@100 | dim_768_cosine_map@100 | dim_512_cosine_map@100 | dim_256_cosine_map@100 | dim_128_cosine_map@100 | dim_64_cosine_map@100 |
|:----------:|:------:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.9664 | 9 | - | 0.4730 | 0.4766 | 0.4640 | 0.4612 | 0.4456 | 0.4083 |
| 1.0738 | 10 | 2.6023 | - | - | - | - | - | - |
| 1.9329 | 18 | - | 0.4951 | 0.4966 | 0.4977 | 0.4773 | 0.4849 | 0.4501 |
| 2.1477 | 20 | 0.974 | - | - | - | - | - | - |
| 2.8993 | 27 | - | 0.4891 | 0.4973 | 0.4941 | 0.4867 | 0.4925 | 0.4684 |
| 3.2215 | 30 | 0.408 | - | - | - | - | - | - |
| **3.9732** | **37** | **-** | **0.4944** | **0.4998** | **0.4931** | **0.4991** | **0.4974** | **0.4616** |
| 4.2953 | 40 | 0.2718 | - | - | - | - | - | - |
| 4.8322 | 45 | - | 0.4924 | 0.4952 | 0.4908 | 0.4920 | 0.4973 | 0.4595 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 1.1.0.dev0
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf
|
RichardErkhov
| 2024-10-14T08:59:04Z | 60 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-13T20:34:57Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Open_Gpt4_8x7B_v0.1 - GGUF
- Model creator: https://huggingface.co/rombodawg/
- Original model: https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Open_Gpt4_8x7B_v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q2_K.gguf) | Q2_K | 16.12GB |
| [Open_Gpt4_8x7B_v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [Open_Gpt4_8x7B_v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [Open_Gpt4_8x7B_v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [Open_Gpt4_8x7B_v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [Open_Gpt4_8x7B_v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q3_K.gguf) | Q3_K | 21.0GB |
| [Open_Gpt4_8x7B_v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [Open_Gpt4_8x7B_v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [Open_Gpt4_8x7B_v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [Open_Gpt4_8x7B_v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q4_0.gguf) | Q4_0 | 24.63GB |
| [Open_Gpt4_8x7B_v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [Open_Gpt4_8x7B_v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [Open_Gpt4_8x7B_v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q4_K.gguf) | Q4_K | 26.49GB |
| [Open_Gpt4_8x7B_v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [Open_Gpt4_8x7B_v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q4_1.gguf) | Q4_1 | 27.32GB |
| [Open_Gpt4_8x7B_v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q5_0.gguf) | Q5_0 | 30.02GB |
| [Open_Gpt4_8x7B_v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [Open_Gpt4_8x7B_v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q5_K.gguf) | Q5_K | 30.95GB |
| [Open_Gpt4_8x7B_v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [Open_Gpt4_8x7B_v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q5_1.gguf) | Q5_1 | 32.71GB |
| [Open_Gpt4_8x7B_v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/blob/main/Open_Gpt4_8x7B_v0.1.Q6_K.gguf) | Q6_K | 35.74GB |
| [Open_Gpt4_8x7B_v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/rombodawg_-_Open_Gpt4_8x7B_v0.1-gguf/tree/main/) | Q8_0 | 46.22GB |
Original model description:
---
license: cc-by-4.0
tags:
- merge
- moe
---
Open_Gpt4
____________________________________________________________________________________________
VERSION 0.2 OUT NOW:
Fp16:
- https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2
q8_0.gguf:
- https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf
____________________________________________________________________________________________

This model is a TIES merger of notux-8x7b-v1 and UNAversal-8x7B-v1beta, with MixtralOrochi8x7B as the base model.
I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merger of many useful Mixtral models such as Mixtral instruct,
Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like Gpt-4. But for that more testing is required. I hope the community can help me determine whether it is deserving of its name. 😊
Base model:
- https://huggingface.co/smelborp/MixtralOrochi8x7B
Merged models:
- https://huggingface.co/fblgit/UNAversal-8x7B-v1beta
- https://huggingface.co/argilla/notux-8x7b-v1
Instruct template: Alpaca
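For reference, the Alpaca template wraps each request in an instruction/response scaffold; a minimal Python sketch of building such a prompt (the instruction text is just an example):

```python
# Minimal sketch of the Alpaca prompt format this model expects.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Summarize the TIES merge method in one sentence."
)
print(prompt)
```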
Merge config:
```yaml
models:
- model: notux-8x7b-v1
parameters:
density: .5
weight: 1
- model: UNAversal-8x7B-v1beta
parameters:
density: .5
weight: 1
merge_method: ties
base_model: MixtralOrochi8x7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
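A config like this can be applied with mergekit; the sketch below follows mergekit's documented Python entry point, with placeholder paths (the same merge can also be run from the shell via the `mergekit-yaml` command):

```python
# Minimal sketch of running the merge config above with mergekit.
# File and output paths are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("open_gpt4_ties.yml", "r", encoding="utf-8") as fp:  # hypothetical filename
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Open_Gpt4_8x7B_v0.1",  # directory for the merged weights
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry the base model's tokenizer over
    ),
)
```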
| anirban1612/bert-finetuned-ner | anirban1612 | 2024-10-14T08:54:48Z | 61 | 0 | transformers | ["transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-10-14T08:08:43Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: anirban1612/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# anirban1612/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0272
- Validation Loss: 0.0535
- Epoch: 2
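As an illustrative usage sketch (not part of the auto-generated card), the checkpoint can be loaded for inference with the `transformers` pipeline; the example sentence is a placeholder:

```python
# Minimal sketch: run this TensorFlow checkpoint for NER inference.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="anirban1612/bert-finetuned-ner",
    framework="tf",                 # the checkpoint is TensorFlow/Keras
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Hugging Face is based in New York City."))
```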
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
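The serialized optimizer above is the standard `transformers` TensorFlow setup (AdamWeightDecay with a linear polynomial decay); a minimal sketch of reconstructing it, with the values read off the config dump:

```python
# Minimal sketch: rebuild the optimizer described above with the standard
# transformers helper. Values are taken from the serialized config dump.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate
    num_train_steps=2634,    # decay_steps of the polynomial schedule
    num_warmup_steps=0,
    weight_decay_rate=0.01,  # AdamWeightDecay's weight decay
)
```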
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1824 | 0.0706 | 0 |
| 0.0468 | 0.0517 | 1 |
| 0.0272 | 0.0535 | 2 |
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.1
- Tokenizers 0.19.1
|