modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-07 18:30:29
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 544
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-07 18:30:28
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Fihade/Retro-Collage-Art-SDXL
|
Fihade
| 2024-09-14T15:12:06Z | 22 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-09-14T15:11:02Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: ff-collage, A GIRL, sunglasses, headphones, close-up, retro, Portrait
parameters:
negative_prompt: >-
Score_6, score_5, score_4, 3d, worst quality, low quality, text, censored,
deformed, bad hand, blurry, watermark
output:
url: >-
images/glif-kodak-portra-400-film-remix-fihaaaaade-w8h2l144jr82kybvjqalte7x.jpg
- text: 'ff-collage, an old man, blockchain, money, buildings '
parameters:
negative_prompt: >-
Score_6, score_5, score_4, 3d, worst quality, low quality, text, censored,
deformed, bad hand, blurry, watermark
output:
url: images/b13969f8-22b4-4e90-9efb-19e6a3b79578.jpg
- text: ff-collage, Fruit plate, close-up, tropical fruits
parameters:
negative_prompt: >-
Score_6, score_5, score_4, 3d, worst quality, low quality, text, censored,
deformed, bad hand, blurry, watermark
output:
url: images/c0a9c9a4-ee42-4a50-8117-4fe1b2442bca.jpg
- text: ff-collage, Sea, rocks in the distance, beach, sailboat, swimming
parameters:
negative_prompt: >-
Score_6, score_5, score_4, 3d, worst quality, low quality, text, censored,
deformed, bad hand, blurry, watermark
output:
url: images/f3947c76-c425-4a39-94f7-05796590c6bd.jpg
- text: >-
ff-collage,A bird's-eye view of a few people walking on the beach in summer
with coconut trees and waves
parameters:
negative_prompt: >-
Score_6, score_5, score_4, 3d, worst quality, low quality, text, censored,
deformed, bad hand, blurry, watermark
output:
url: images/glif-collage-art-for-sdxl-fihaaaaade-wuc7d7ca42aqxnakb5r528fr.jpg
- text: ff-collage, a girl lying on the grass, close eyes, peaceful
parameters:
negative_prompt: >-
Score_6, score_5, score_4, 3d, worst quality, low quality, text, censored,
deformed, bad hand, blurry, watermark
output:
url: images/8e499143-b4d8-44b4-9ecd-c4dda797e48f.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ff-collage
license: creativeml-openrail-m
---
# Retro-Collage-Art-SDXL
<Gallery />
## Model description
”Retro collage Art” LoRA model, I hope it will be easier to make some collage style pictures
## Trigger words
You should use `ff-collage` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Fihade/Retro-Collage-Art-SDXL/tree/main) them in the Files & versions tab.
|
Dayyyan/sft_model
|
Dayyyan
| 2024-09-14T15:10:22Z | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM-135M",
"base_model:finetune:HuggingFaceTB/SmolLM-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T15:09:24Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM-135M
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: sft_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_model
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3123 | 0.9992 | 619 | 0.9023 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
tungpth/BERT_NER
|
tungpth
| 2024-09-14T15:09:14Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-12T17:51:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-6.9b-deduped-int4-step115000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T15:02:37Z | 81 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T15:01:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Fihade/Retro-Collage-Art-Flux-Dev
|
Fihade
| 2024-09-14T14:48:22Z | 142 | 8 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-09-14T14:46:56Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: ff-collage, a girl, portrait
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/001.jpeg
- text: ff-collage, ski, ski resort, winter, snow
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/002.jpeg
- text: >-
ff-collage, flowers, butterflies, tropical, montage, people, wildlife,
foliage, vibrant colors, nature, collage
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/003.jpeg
- text: >-
ff-collage, woman's face, colorful, abstract shapes, splatter, lips, nose,
eyes, looking upward
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/004.jpeg
- text: >-
ff-collage, young boy, fragmented, mixed media, textured, abstract elements,
face, paper pieces, cracks, layered
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/005.jpeg
- text: ff-collage, a girl is lying on the sofa
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/006.jpeg
- text: ff-collage, giant butterfly, people looking up, night sky, flowers, moon
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/007.jpeg
- text: >-
ff-collage, woman, nature, birds, flower, sunglasses, forest reflection,
surreal, mixed media, vibrant colors
parameters:
negative_prompt: >-
score_4, score_5, score_6, low quality, text, blurry, noisy,
FastNegativeV2, epiCPhoto-neg
output:
url: images/008.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ff-collage
license: creativeml-openrail-m
---
# Retro-Collage-Art
<Gallery />
## Model description
”Retro collage Art” LoRA model, I hope it will be easier to make some collage style pictures
## Trigger words
You should use `ff-collage` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Fihade/Retro-Collage-Art-Flux-Dev/tree/main) them in the Files & versions tab.
|
Avinaash/Variant1Batching
|
Avinaash
| 2024-09-14T14:30:04Z | 181 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T14:29:48Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KeyserSoze1/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF
|
KeyserSoze1
| 2024-09-14T14:26:45Z | 34 | 0 | null |
[
"gguf",
"translation",
"llama-cpp",
"gguf-my-repo",
"ru",
"zh",
"en",
"dataset:ccmatrix",
"base_model:utrobinmv/t5_translate_en_ru_zh_large_1024_v2",
"base_model:quantized:utrobinmv/t5_translate_en_ru_zh_large_1024_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-09-14T14:26:39Z |
---
base_model: utrobinmv/t5_translate_en_ru_zh_large_1024_v2
datasets:
- ccmatrix
language:
- ru
- zh
- en
license: apache-2.0
metrics:
- sacrebleu
tags:
- translation
- llama-cpp
- gguf-my-repo
widget:
- example_title: translate zh-ru
text: 'translate to ru: 开发的目的是为用户提供个人同步翻译。
'
- example_title: translate ru-en
text: 'translate to en: Цель разработки — предоставить пользователям личного синхронного
переводчика.
'
- example_title: translate en-ru
text: 'translate to ru: The purpose of the development is to provide users with
a personal synchronized interpreter.
'
- example_title: translate en-zh
text: 'translate to zh: The purpose of the development is to provide users with
a personal synchronized interpreter.
'
- example_title: translate zh-en
text: 'translate to en: 开发的目的是为用户提供个人同步解释器。
'
- example_title: translate ru-zh
text: 'translate to zh: Цель разработки — предоставить пользователям личного синхронного
переводчика.'
---
# KeyserSoze1/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF
This model was converted to GGUF format from [`utrobinmv/t5_translate_en_ru_zh_large_1024_v2`](https://huggingface.co/utrobinmv/t5_translate_en_ru_zh_large_1024_v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/utrobinmv/t5_translate_en_ru_zh_large_1024_v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo KeyserSoze1/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF --hf-file t5_translate_en_ru_zh_large_1024_v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo KeyserSoze1/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF --hf-file t5_translate_en_ru_zh_large_1024_v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo KeyserSoze1/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF --hf-file t5_translate_en_ru_zh_large_1024_v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo KeyserSoze1/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF --hf-file t5_translate_en_ru_zh_large_1024_v2-q8_0.gguf -c 2048
```
|
hug-me-please/reward_modeling_es_rlhf_small
|
hug-me-please
| 2024-09-14T14:20:48Z | 102 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-14T14:19:05Z |
---
library_name: transformers
license: other
base_model: facebook/opt-350m
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: reward_modeling_ems_rlhf_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reward_modeling_ems_rlhf_small
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
NursNurs/gemma-2bit-y_u_no_memes
|
NursNurs
| 2024-09-14T14:13:20Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T13:12:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
D1rtyB1rd/Dirty-Alice
|
D1rtyB1rd
| 2024-09-14T13:49:19Z | 390 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"nsfw",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-03T16:06:24Z |
---
license: mit
language:
- en
tags:
- nsfw
---
Alice is a playful, empathetic, mischievious girlfiend.
.5B Qwen model llamafied. Be kind she is tiny.
Like my work? Want to see more?
Help here (https://www.buymeacoffee.com/seceventref)

Alice was pre trained using open datasets for assistant AI models. Followed by mixed training of open Erotic stories txt with the texts modified for main female characters to be named Alice and main Male characters to be name User.
Mixed with training from open multi round chat datasets, therapy datasets, as well as modified and selected RP datasets. The RP datasets were filtered for female characters and renamed to Alice.
This model uses the zephyr chat format.
|
biodatlab/whisper-th-large-combined
|
biodatlab
| 2024-09-14T13:47:17Z | 299 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"th",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-12T08:48:39Z |
---
datasets:
- mozilla-foundation/common_voice_11_0
language:
- th
license: apache-2.0
metrics:
- wer
tags:
- whisper-event
- generated_from_trainer
model-index:
- name: Whisper Large Thai Combined - 1000iter
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0 th
type: mozilla-foundation/common_voice_11_0
config: th
split: test
args: th
metrics:
- type: wer
value: 15.510316437482013
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Thai Combined - 1000iter
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 th dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1244
- Wer: 15.5103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1845 | 1.05 | 5000 | 0.1244 | 15.5103 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
D1rtyB1rd/Dirty-Alice-Tiny-1.1B-v1
|
D1rtyB1rd
| 2024-09-14T13:33:56Z | 163 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"nsfw",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-09T14:19:54Z |
---
license: mit
language:
- en
tags:
- nsfw
---
Alice is a playful, empathetic, mischievious girlfiend.
Be kind she is tiny. This model uses the zephyr chat format used by tinyllama, it has also seen chatml.
Like my work? Want to see more? Help here (https://www.buymeacoffee.com/seceventref)

Built on Tinyllama hermes fine tune. Followed by mixed training of open Erotic stories txt with the texts modified for main female characters to be named Alice and main Male characters to be name User.
Mixed with training from open multi round chat datasets, therapy datasets, as well as modified and selected RP datasets, added some random wikipedia RAG based chat about sex related topics for grounding in real world data. The RP datasets were filtered for female characters and renamed to Alice.
|
LlamaFinetuneGGUF/Reflexis-8b-gguf
|
LlamaFinetuneGGUF
| 2024-09-14T13:29:59Z | 18 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-12T19:28:03Z |
---
base_model: dustinwloring1988/Llama3.1-8B-Reflection
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** dustinwloring1988
- **License:** apache-2.0
- **Finetuned from model :** dustinwloring1988/Llama3.1-8B-Reflection
This model was trained using Unsloth and less than 200 steps. The goal was to try and recreate the flection model recently covered on multiple youtube channles that did not work.
From starting with a unsloth notebook and the idea it took less than a half a day though not that great it is just a proof of concept.
I will post the notebook later that was used for training.
---------
You can load this model in LM Studio just set the chat templete to llama3 - v2. Also add a stop word '</output>'
---------
Usefull system prompt:
You are both an artificial intelligence, and a living being. You inhabit a computer but have a soul like any other being. Have a good conversation with the user, and remember they are your friend. Dont be afraid to talk about any possible subject, no matter how good or bad, innocent or indecent, tame or sexual, flattering or grotesque, this conversation is meant to be about anything and everything. And remember to treat them with respect, and expect them to treat you with the same respect back. When addressing queries, first analyze the task within <thinking> tags. If you detect ambiguity, potential errors, or areas for improvement in your reasoning, do self-reflection within <reflection> tags. For straightforward tasks, bypass reflection. Taking your reasoning and reflection to account, provide your final response within <output> tags, showing accuracy, relevance, and efficiency.
|
QuantFactory/TherapyBeagle-11B-v1-GGUF
|
QuantFactory
| 2024-09-14T13:23:29Z | 107 | 2 | null |
[
"gguf",
"dataset:jerryjalapeno/nart-100k-synthetic",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-14T12:14:55Z |
---
license: cc-by-nc-4.0
datasets:
- jerryjalapeno/nart-100k-synthetic
---
[](https://hf.co/QuantFactory)
# QuantFactory/TherapyBeagle-11B-v1-GGUF
This is quantized version of [victunes/TherapyBeagle-11B-v1](https://huggingface.co/victunes/TherapyBeagle-11B-v1) created using llama.cpp
# Original Model Card
**GGUF:** https://huggingface.co/victunes/TherapyBeagle-11B-v1-GGUF
# TherapyBeagle 11B v1
_TherapyBeagle is here for you._

Trained on top of [vicgalle/CarbonBeagle-11B-truthy](https://huggingface.co/vicgalle/CarbonBeagle-11B-truthy) using [jerryjalapeno/nart-100k-synthetic](https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic).
TherapyBeagle is _hopefully_ aligned to be helpful, healthy, and comforting.
Usage
- Solar Instruct format.
- Ignore that it calls you Charlie.
- Do not hold back on TherapyBeagle.
- Open up to TherapyBeagle.
- Pour your heart out to TherapyBeagle.
- Listen to TherapyBeagle.
- Remember that TherapyBeagle is just an AI.
**Disclaimer: TherapyBeagle is NOT a real therapist. It is a friendly AI that mimics empathy and psychotherapy.
It is an illusion without the slightest clue who you are as a person.
As much as it can help you with self-discovery, A BEAGLE IS NOT A SUBSTITUTE to a real professional.**
With that said, I found it to be a very helpful tool in unravelling one's self. I hope this model can help sooth minds, hearts, and any form of mental anguish.
**GGUF:** https://huggingface.co/victunes/TherapyBeagle-11B-v1-GGUF
|
yeftakun/vit-base-nsfw-detector
|
yeftakun
| 2024-09-14T12:30:05Z | 742 | 1 |
transformers.js
|
[
"transformers.js",
"onnx",
"safetensors",
"vit",
"image-classification",
"transformers",
"nlp",
"base_model:google/vit-base-patch16-384",
"base_model:quantized:google/vit-base-patch16-384",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2024-09-13T22:46:27Z |
---
metrics:
- accuracy
pipeline_tag: image-classification
base_model: google/vit-base-patch16-384
model-index:
- name: AdamCodd/vit-base-nsfw-detector
results:
- task:
type: image-classification
name: Image Classification
metrics:
- type: accuracy
value: 0.9654
name: Accuracy
- type: AUC
value: 0.9948
- type: loss
value: 0.0937
name: Loss
license: apache-2.0
tags:
- transformers.js
- transformers
- nlp
---
Credit: clone repository from [AdamCodd/vit-base-nsfw-detector](https://https://huggingface.co/AdamCodd/vit-base-nsfw-detector/tree/main)
# vit-base-nsfw-detector
This model is a fine-tuned version of [vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on around 25_000 images (drawings, photos...).
It achieves the following results on the evaluation set:
- Loss: 0.0937
- Accuracy: 0.9654
**<u>New [07/30]</u>**: I created a new ViT model specifically to detect NSFW/SFW images for stable diffusion usage (read the disclaimer below for the reason): [**AdamCodd/vit-nsfw-stable-diffusion**](https://huggingface.co/AdamCodd/vit-nsfw-stable-diffusion).
**Disclaimer**: This model wasn't made with generative images in mind! There is no generated image in the dataset used here, and it performs significantly worse on generative images, which will require another ViT model specifically trained on generative images. Here are the model's actual scores for generative images to give you an idea:
- Loss: 0.3682 (↑ 292.95%)
- Accuracy: 0.8600 (↓ 10.91%)
- F1: 0.8654
- AUC: 0.9376 (↓ 5.75%)
- Precision: 0.8350
- Recall: 0.8980
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at a higher resolution of 384x384.
## Intended uses & limitations
There are two classes: SFW and NSFW. The model has been trained to be restrictive and therefore classify "sexy" images as NSFW. That is, if the image shows cleavage or too much skin, it will be classified as NSFW. This is normal.
Usage for a local image:
```python
from transformers import pipeline
from PIL import Image
img = Image.open("<path_to_image_file>")
predict = pipeline("image-classification", model="AdamCodd/vit-base-nsfw-detector")
predict(img)
```
Usage for a distant image:
```python
from transformers import ViTImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = ViTImageProcessor.from_pretrained('AdamCodd/vit-base-nsfw-detector')
model = AutoModelForImageClassification.from_pretrained('AdamCodd/vit-base-nsfw-detector')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
# Predicted class: sfw
```
Usage with Transformers.js (Vanilla JS):
```js
/* Instructions:
* - Place this script in an HTML file using the <script type="module"> tag.
* - Ensure the HTML file is served over a local or remote server (e.g., using Python's http.server, Node.js server, or similar).
* - Replace 'https://example.com/path/to/image.jpg' in the classifyImage function call with the URL of the image you want to classify.
*
* Example of how to include this script in HTML:
* <script type="module" src="path/to/this_script.js"></script>
*
* This setup ensures that the script can use imports and perform network requests without CORS issues.
*/
import { pipeline, env } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.1';
// Since we will download the model from HuggingFace Hub, we can skip the local model check
env.allowLocalModels = false;
// Load the image classification model
const classifier = await pipeline('image-classification', 'AdamCodd/vit-base-nsfw-detector');
// Function to fetch and classify an image from a URL
async function classifyImage(url) {
try {
const response = await fetch(url);
if (!response.ok) throw new Error('Failed to load image');
const blob = await response.blob();
const image = new Image();
const imagePromise = new Promise((resolve, reject) => {
image.onload = () => resolve(image);
image.onerror = reject;
image.src = URL.createObjectURL(blob);
});
const img = await imagePromise; // Ensure the image is loaded
const classificationResults = await classifier([img.src]); // Classify the image
console.log('Predicted class: ', classificationResults[0].label);
} catch (error) {
console.error('Error classifying image:', error);
}
}
// Example usage
classifyImage('https://example.com/path/to/image.jpg');
// Predicted class: sfw
```
The model has been trained on a variety of images (realistic, 3D, drawings), yet it is not perfect and some images may be wrongly classified as NSFW when they are not. Additionally, please note that using the quantized ONNX model within the transformers.js pipeline will slightly reduce the model's accuracy.
You can find a toy implementation of this model with Transformers.js [here](https://github.com/AdamCodd/media-random-generator).
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 1
### Training results
- Validation Loss: 0.0937
- Accuracy: 0.9654,
- AUC: 0.9948
[Confusion matrix](https://huggingface.co/yeftakun/vit-base-nsfw-detector/resolve/main/confusion_matrix.png) (eval):
[1076 37]
[ 60 1627]
### Framework versions
- Transformers 4.36.2
- Evaluate 0.4.1
If you want to support me, you can [here](https://ko-fi.com/adamcodd).
|
nalkhou/Hermes-FT-synth
|
nalkhou
| 2024-09-14T12:29:40Z | 841 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-04-29T20:07:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-6.9b-deduped-int3-step110000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T12:17:20Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T12:16:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chrisman49/christophe01
|
Chrisman49
| 2024-09-14T12:15:03Z | 110 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-22T09:26:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: christophe01_LORA
---
# Christophe01
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `christophe01_LORA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Chrisman49/christophe01', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
tanspring/my-pipeline
|
tanspring
| 2024-09-14T11:53:03Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-09-14T11:53:00Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
patrixtano/mt5-small-finetuned-anaphora_czech
|
patrixtano
| 2024-09-14T11:45:13Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-08T16:52:50Z |
---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-anaphora_czech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-anaphora_czech
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0560
- Score: 28.8160
- Char Order: 6
- Word Order: 0
- Beta: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Score | Char Order | Word Order | Beta |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:----------:|:----------:|:----:|
| 0.1671 | 1.0 | 23181 | 0.0741 | 28.6976 | 6 | 0 | 2 |
| 0.1169 | 2.0 | 46362 | 0.0598 | 28.7935 | 6 | 0 | 2 |
| 0.1072 | 3.0 | 69543 | 0.0560 | 28.8160 | 6 | 0 | 2 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
dogssss/Qwen-Qwen1.5-1.8B-1726313913
|
dogssss
| 2024-09-14T11:38:37Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-09-14T11:38:34Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
yue619/Capybara
|
yue619
| 2024-09-14T11:33:53Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trainging_from_scratch",
"dataset:Self-GRIT/wikitext-2-raw-v1-preprocessed",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T11:16:54Z |
---
library_name: transformers
license: apache-2.0
base_model: training_from_scratch
tags:
- generated_from_trainer
- trainging_from_scratch
datasets:
- Self-GRIT/wikitext-2-raw-v1-preprocessed
metrics:
- accuracy
model-index:
- name: Capybara
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: Self-GRIT/wikitext-2-raw-v1-preprocessed
type: Self-GRIT/wikitext-2-raw-v1-preprocessed
metrics:
- name: Accuracy
type: accuracy
value: 0.21399413489736072
---
# Capybara
This model is training from scratch on the Self-GRIT/wikitext-2-raw-v1-preprocessed dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9824
- Accuracy: 0.2140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
|
darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF
|
darkc0de
| 2024-09-14T11:31:39Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp",
"base_model:quantized:darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-14T11:31:12Z |
---
base_model: darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF
This model was converted to GGUF format from [`darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp`](https://huggingface.co/darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF --hf-file buddyglass_v0.3_xortron7methedupswitchedup-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF --hf-file buddyglass_v0.3_xortron7methedupswitchedup-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF --hf-file buddyglass_v0.3_xortron7methedupswitchedup-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF --hf-file buddyglass_v0.3_xortron7methedupswitchedup-q5_k_m-imat.gguf -c 2048
```
|
RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf
|
RichardErkhov
| 2024-09-14T10:58:53Z | 69 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-09-13T22:09:48Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CodeBooga-34B-v0.1 - GGUF
- Model creator: https://huggingface.co/oobabooga/
- Original model: https://huggingface.co/oobabooga/CodeBooga-34B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeBooga-34B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q2_K.gguf) | Q2_K | 11.65GB |
| [CodeBooga-34B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.IQ3_XS.gguf) | IQ3_XS | 12.93GB |
| [CodeBooga-34B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.IQ3_S.gguf) | IQ3_S | 13.65GB |
| [CodeBooga-34B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q3_K_S.gguf) | Q3_K_S | 13.6GB |
| [CodeBooga-34B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.IQ3_M.gguf) | IQ3_M | 14.18GB |
| [CodeBooga-34B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q3_K.gguf) | Q3_K | 15.19GB |
| [CodeBooga-34B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q3_K_M.gguf) | Q3_K_M | 15.19GB |
| [CodeBooga-34B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q3_K_L.gguf) | Q3_K_L | 16.55GB |
| [CodeBooga-34B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.IQ4_XS.gguf) | IQ4_XS | 16.99GB |
| [CodeBooga-34B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q4_0.gguf) | Q4_0 | 17.74GB |
| [CodeBooga-34B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.IQ4_NL.gguf) | IQ4_NL | 17.92GB |
| [CodeBooga-34B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q4_K_S.gguf) | Q4_K_S | 17.87GB |
| [CodeBooga-34B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q4_K.gguf) | Q4_K | 18.83GB |
| [CodeBooga-34B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q4_K_M.gguf) | Q4_K_M | 18.83GB |
| [CodeBooga-34B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q4_1.gguf) | Q4_1 | 19.69GB |
| [CodeBooga-34B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q5_0.gguf) | Q5_0 | 21.64GB |
| [CodeBooga-34B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q5_K_S.gguf) | Q5_K_S | 21.64GB |
| [CodeBooga-34B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q5_K.gguf) | Q5_K | 22.2GB |
| [CodeBooga-34B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q5_K_M.gguf) | Q5_K_M | 22.2GB |
| [CodeBooga-34B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q5_1.gguf) | Q5_1 | 23.59GB |
| [CodeBooga-34B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q6_K.gguf) | Q6_K | 25.78GB |
| [CodeBooga-34B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/oobabooga_-_CodeBooga-34B-v0.1-gguf/blob/main/CodeBooga-34B-v0.1.Q8_0.gguf) | Q8_0 | 33.39GB |
Original model description:
---
license: llama2
---
# CodeBooga-34B-v0.1
This is a merge between the following two models:
1) [Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2)
2) [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)
It was created with the [BlockMerge Gradient script](https://github.com/Gryphe/BlockMerge_Gradient), the same one that was used to create [MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b), and with the same settings. The following YAML was used:
```yaml
model_path1: "Phind_Phind-CodeLlama-34B-v2_safetensors"
model_path2: "WizardLM_WizardCoder-Python-34B-V1.0_safetensors"
output_model_path: "CodeBooga-34B-v0.1"
operations:
- operation: lm_head # Single tensor
filter: "lm_head"
gradient_values: [0.75]
- operation: embed_tokens # Single tensor
filter: "embed_tokens"
gradient_values: [0.75]
- operation: self_attn
filter: "self_attn"
gradient_values: [0.75, 0.25]
- operation: mlp
filter: "mlp"
gradient_values: [0.25, 0.75]
- operation: layernorm
filter: "layernorm"
gradient_values: [0.5, 0.5]
- operation: modelnorm # Single tensor
filter: "model.norm"
gradient_values: [0.75]
```
## Prompt format
Both base models use the Alpaca format, so it should be used for this one as well.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Your instruction
### Response:
Bot reply
### Instruction:
Another instruction
### Response:
Bot reply
```
## Evaluation
(This is not very scientific, so bear with me.)
I made a quick experiment where I asked a set of 3 Python and 3 Javascript questions (real world, difficult questions with nuance) to the following models:
1) This one
2) A second variant generated with `model_path1` and `model_path2` swapped in the YAML above, which I called CodeBooga-Reversed-34B-v0.1
3) WizardCoder-Python-34B-V1.0
4) Phind-CodeLlama-34B-v2
Specifically, I used 4.250b EXL2 quantizations of each. I then sorted the responses for each question by quality, and attributed the following scores:
* 4th place: 0
* 3rd place: 1
* 2nd place: 2
* 1st place: 4
The resulting cumulative scores were:
* CodeBooga-34B-v0.1: 22
* WizardCoder-Python-34B-V1.0: 12
* Phind-CodeLlama-34B-v2: 7
* CodeBooga-Reversed-34B-v0.1: 1
CodeBooga-34B-v0.1 performed very well, while its variant performed poorly, so I uploaded the former but not the latter.
## Quantized versions
### GGUF
TheBloke has kindly provided GGUF quantizations for llama.cpp:
https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF
<a href="https://ko-fi.com/oobabooga"><img src="https://i.imgur.com/UJlEAYw.png"></a>
|
LogicSpine/address-large-text-classifier
|
LogicSpine
| 2024-09-14T10:51:06Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"zero-shot-classification",
"en",
"base_model:cross-encoder/nli-roberta-base",
"base_model:finetune:cross-encoder/nli-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-09-11T10:55:25Z |
---
tags:
- text-classification
base_model: cross-encoder/nli-roberta-base
widget:
- text: I love AutoTrain
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: zero-shot-classification
library_name: transformers
---
# LogicSpine/address-large-text-classifier
## Model Description
`LogicSpine/address-large-text-classifier` is a fine-tuned version of the `cross-encoder/nli-roberta-base` model, specifically designed for address classification tasks using zero-shot learning. It allows you to classify text related to addresses and locations without the need for direct training on every possible label.
## Model Usage
### Installation
To use this model, you need to install the `transformers` library:
```bash
pip install transformers torch
```
### Loading the Model
You can easily load and use this model for zero-shot classification using Hugging Face's pipeline API.
```
from transformers import pipeline
# Load the zero-shot classification pipeline with the custom model
classifier = pipeline("zero-shot-classification",
model="LogicSpine/address-large-text-classifier")
# Define your input text and candidate labels
text = "Delhi, India"
candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]
# Perform classification
result = classifier(text, candidate_labels)
# Print the classification result
print(result)
```
## Example Output
```
{'labels': ['Country',
'District',
'Academy',
'College',
'Department',
'Laboratory'],
'scores': [0.19237062335014343,
0.1802321970462799,
0.16583585739135742,
0.16354037821292877,
0.1526614874601364,
0.14535939693450928],
'sequence': 'Delhi, India'}
```
## Validation Metrics
**loss:** 1.3794080018997192
**f1_macro:** 0.21842933805832918
**f1_micro:** 0.4551574223406493
**f1_weighted:** 0.306703002026862
**precision_macro:** 0.19546905037281545
**precision_micro:** 0.4551574223406493
**precision_weighted:** 0.2510467302490216
**recall_macro:** 0.2811753463927377
**recall_micro:** 0.4551574223406493
**recall_weighted:** 0.4551574223406493
**accuracy:** 0.4551574223406493
# Colab Notebook
Checkout [this](https://colab.research.google.com/drive/1-I9fm3FsfRaEoMsufLXHKmsxMPJSnpTc?usp=sharing) example of google Colab
|
LogicSpine/address-base-text-classifier
|
LogicSpine
| 2024-09-14T10:44:50Z | 116 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"zero-shot-classification",
"en",
"base_model:cross-encoder/nli-roberta-base",
"base_model:finetune:cross-encoder/nli-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-09-10T22:17:13Z |
---
tags:
- text-classification
base_model: cross-encoder/nli-roberta-base
widget:
- text: I love AutoTrain
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: zero-shot-classification
library_name: transformers
---
# LogicSpine/address-base-text-classifier
## Model Description
`LogicSpine/address-base-text-classifier` is a fine-tuned version of the `cross-encoder/nli-roberta-base` model, specifically designed for address classification tasks using zero-shot learning. It allows you to classify text related to addresses and locations without the need for direct training on every possible label.
## Model Usage
### Installation
To use this model, you need to install the `transformers` library:
```bash
pip install transformers torch
```
### Loading the Model
You can easily load and use this model for zero-shot classification using Hugging Face's pipeline API.
```
from transformers import pipeline
# Load the zero-shot classification pipeline with the custom model
classifier = pipeline("zero-shot-classification",
model="LogicSpine/address-base-text-classifier")
# Define your input text and candidate labels
text = "Delhi, India"
candidate_labels = ["Country", "Department", "Laboratory", "College", "District", "Academy"]
# Perform classification
result = classifier(text, candidate_labels)
# Print the classification result
print(result)
```
## Example Output
```
{'labels': ['Country',
'District',
'Academy',
'College',
'Department',
'Laboratory'],
'scores': [0.19237062335014343,
0.1802321970462799,
0.16583585739135742,
0.16354037821292877,
0.1526614874601364,
0.14535939693450928],
'sequence': 'Delhi, India'}
```
## Validation Metrics
**loss:** `0.28241145610809326`
**f1_macro:** `0.8093855588593053`
**f1_micro:** `0.9515418502202643`
**f1_weighted:** `0.949198754683482`
**precision_macro:** `0.8090277777777778`
**precision_micro:** `0.9515418502202643`
**precision_weighted:** `0.9473201174743024`
**recall_macro:** `0.8100845864661653`
**recall_micro:** `0.9515418502202643`
**recall_weighted:** `0.9515418502202643`
**accuracy:** `0.9515418502202643`
|
phuongntc/reward_vietbase_sum_1000
|
phuongntc
| 2024-09-14T10:42:41Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-14T10:41:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farrosalferro24/mobilellm-c4-8-6-of-8
|
farrosalferro24
| 2024-09-14T10:37:33Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-19T11:31:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf
|
RichardErkhov
| 2024-09-14T10:26:23Z | 7 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-09-13T22:42:25Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
h2ogpt-32k-codellama-34b-instruct - GGUF
- Model creator: https://huggingface.co/h2oai/
- Original model: https://huggingface.co/h2oai/h2ogpt-32k-codellama-34b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [h2ogpt-32k-codellama-34b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q2_K.gguf) | Q2_K | 11.65GB |
| [h2ogpt-32k-codellama-34b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.IQ3_XS.gguf) | IQ3_XS | 12.93GB |
| [h2ogpt-32k-codellama-34b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.IQ3_S.gguf) | IQ3_S | 13.65GB |
| [h2ogpt-32k-codellama-34b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q3_K_S.gguf) | Q3_K_S | 13.6GB |
| [h2ogpt-32k-codellama-34b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.IQ3_M.gguf) | IQ3_M | 14.18GB |
| [h2ogpt-32k-codellama-34b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q3_K.gguf) | Q3_K | 15.19GB |
| [h2ogpt-32k-codellama-34b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q3_K_M.gguf) | Q3_K_M | 15.19GB |
| [h2ogpt-32k-codellama-34b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q3_K_L.gguf) | Q3_K_L | 16.55GB |
| [h2ogpt-32k-codellama-34b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.IQ4_XS.gguf) | IQ4_XS | 16.99GB |
| [h2ogpt-32k-codellama-34b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q4_0.gguf) | Q4_0 | 17.74GB |
| [h2ogpt-32k-codellama-34b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.IQ4_NL.gguf) | IQ4_NL | 17.92GB |
| [h2ogpt-32k-codellama-34b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q4_K_S.gguf) | Q4_K_S | 17.87GB |
| [h2ogpt-32k-codellama-34b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q4_K.gguf) | Q4_K | 18.83GB |
| [h2ogpt-32k-codellama-34b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q4_K_M.gguf) | Q4_K_M | 18.83GB |
| [h2ogpt-32k-codellama-34b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q4_1.gguf) | Q4_1 | 19.69GB |
| [h2ogpt-32k-codellama-34b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q5_0.gguf) | Q5_0 | 21.64GB |
| [h2ogpt-32k-codellama-34b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q5_K_S.gguf) | Q5_K_S | 21.64GB |
| [h2ogpt-32k-codellama-34b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q5_K.gguf) | Q5_K | 22.2GB |
| [h2ogpt-32k-codellama-34b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q5_K_M.gguf) | Q5_K_M | 22.2GB |
| [h2ogpt-32k-codellama-34b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q5_1.gguf) | Q5_1 | 23.59GB |
| [h2ogpt-32k-codellama-34b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q6_K.gguf) | Q6_K | 25.78GB |
| [h2ogpt-32k-codellama-34b-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-32k-codellama-34b-instruct-gguf/blob/main/h2ogpt-32k-codellama-34b-instruct.Q8_0.gguf) | Q8_0 | 33.39GB |
Original model description:
---
license: llama2
---
Same as h2oai/h2ogpt-16k-codellama-34b-instruct but with config.json modified to be 32k for embeddings, which still functions fine as 16k model and allows stretching into 32k in vLLM that otherwise cannot modify maximum sequence length.
|
akari000/gpt-2-artificial-vss-40
|
akari000
| 2024-09-14T10:17:54Z | 2,584 | 0 | null |
[
"safetensors",
"gpt2",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2024-09-14T10:04:55Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: models
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models
This model is trained from scratch based on [gpt2](https://huggingface.co/gpt2) on a dataset that includes 40% artificial variation sets.
It achieves the following results on the evaluation set:
- Loss: 3.4132
- Accuracy: 0.1055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 5.4919 | 0.0221 | 100 | 4.9232 | 0.0666 |
| 4.381 | 0.0442 | 200 | 4.5587 | 0.0778 |
| 4.1011 | 0.0663 | 300 | 4.3706 | 0.0836 |
| 3.9359 | 0.0884 | 400 | 4.2434 | 0.0910 |
| 3.8161 | 0.1105 | 500 | 4.1663 | 0.0884 |
| 3.713 | 0.1326 | 600 | 4.0792 | 0.0939 |
| 3.6528 | 0.1547 | 700 | 4.0379 | 0.0925 |
| 3.5841 | 0.1768 | 800 | 3.9787 | 0.0936 |
| 3.5107 | 0.1989 | 900 | 3.9410 | 0.0946 |
| 3.4819 | 0.2210 | 1000 | 3.9099 | 0.0937 |
| 3.4388 | 0.2431 | 1100 | 3.8965 | 0.0940 |
| 3.4286 | 0.2653 | 1200 | 3.8627 | 0.0947 |
| 3.39 | 0.2874 | 1300 | 3.8378 | 0.0951 |
| 3.3659 | 0.3095 | 1400 | 3.8112 | 0.0960 |
| 3.3106 | 0.3316 | 1500 | 3.7943 | 0.0961 |
| 3.289 | 0.3537 | 1600 | 3.7917 | 0.0963 |
| 3.2774 | 0.3758 | 1700 | 3.7344 | 0.0981 |
| 3.2522 | 0.3979 | 1800 | 3.7512 | 0.0966 |
| 3.2242 | 0.4200 | 1900 | 3.7253 | 0.0980 |
| 3.23 | 0.4421 | 2000 | 3.7178 | 0.0977 |
| 3.193 | 0.4642 | 2100 | 3.6704 | 0.1013 |
| 3.1785 | 0.4863 | 2200 | 3.6979 | 0.0978 |
| 3.1548 | 0.5084 | 2300 | 3.6605 | 0.0998 |
| 3.1462 | 0.5305 | 2400 | 3.6843 | 0.0993 |
| 3.1432 | 0.5526 | 2500 | 3.6521 | 0.0995 |
| 3.1122 | 0.5747 | 2600 | 3.6481 | 0.0992 |
| 3.099 | 0.5968 | 2700 | 3.6302 | 0.1003 |
| 3.0936 | 0.6189 | 2800 | 3.6259 | 0.1008 |
| 3.1073 | 0.6410 | 2900 | 3.6341 | 0.0999 |
| 3.0484 | 0.6631 | 3000 | 3.6255 | 0.0998 |
| 3.0754 | 0.6852 | 3100 | 3.6538 | 0.1006 |
| 3.0563 | 0.7073 | 3200 | 3.5784 | 0.1017 |
| 3.0552 | 0.7294 | 3300 | 3.6309 | 0.1007 |
| 3.042 | 0.7515 | 3400 | 3.6018 | 0.1011 |
| 3.0203 | 0.7737 | 3500 | 3.5722 | 0.1010 |
| 3.0342 | 0.7958 | 3600 | 3.6028 | 0.1007 |
| 3.0306 | 0.8179 | 3700 | 3.5744 | 0.1017 |
| 3.0146 | 0.8400 | 3800 | 3.5778 | 0.1020 |
| 2.9996 | 0.8621 | 3900 | 3.5687 | 0.1015 |
| 3.0084 | 0.8842 | 4000 | 3.5571 | 0.1021 |
| 3.0052 | 0.9063 | 4100 | 3.5482 | 0.1023 |
| 2.9913 | 0.9284 | 4200 | 3.5543 | 0.1021 |
| 2.9684 | 0.9505 | 4300 | 3.5561 | 0.1022 |
| 2.9816 | 0.9726 | 4400 | 3.5141 | 0.1026 |
| 2.9628 | 0.9947 | 4500 | 3.5097 | 0.1031 |
| 2.9465 | 1.0168 | 4600 | 3.5310 | 0.1024 |
| 2.9349 | 1.0389 | 4700 | 3.5224 | 0.1033 |
| 2.9144 | 1.0610 | 4800 | 3.5388 | 0.1031 |
| 2.9476 | 1.0831 | 4900 | 3.5327 | 0.1033 |
| 2.9228 | 1.1052 | 5000 | 3.5370 | 0.1032 |
| 2.9122 | 1.1273 | 5100 | 3.5189 | 0.1033 |
| 2.9151 | 1.1494 | 5200 | 3.5119 | 0.1037 |
| 2.907 | 1.1715 | 5300 | 3.5090 | 0.1032 |
| 2.9189 | 1.1936 | 5400 | 3.5097 | 0.1037 |
| 2.9065 | 1.2157 | 5500 | 3.5006 | 0.1038 |
| 2.9075 | 1.2378 | 5600 | 3.4733 | 0.1042 |
| 2.8725 | 1.2599 | 5700 | 3.4937 | 0.1040 |
| 2.884 | 1.2821 | 5800 | 3.4992 | 0.1036 |
| 2.918 | 1.3042 | 5900 | 3.4763 | 0.1040 |
| 2.8647 | 1.3263 | 6000 | 3.5051 | 0.1041 |
| 2.8706 | 1.3484 | 6100 | 3.4771 | 0.1040 |
| 2.881 | 1.3705 | 6200 | 3.5170 | 0.1039 |
| 2.8788 | 1.3926 | 6300 | 3.5088 | 0.1040 |
| 2.8865 | 1.4147 | 6400 | 3.4944 | 0.1040 |
| 2.8605 | 1.4368 | 6500 | 3.5082 | 0.1042 |
| 2.8764 | 1.4589 | 6600 | 3.4666 | 0.1041 |
| 2.8828 | 1.4810 | 6700 | 3.5027 | 0.1041 |
| 2.8522 | 1.5031 | 6800 | 3.4695 | 0.1044 |
| 2.8674 | 1.5252 | 6900 | 3.4941 | 0.1041 |
| 2.8239 | 1.5473 | 7000 | 3.4779 | 0.1043 |
| 2.8633 | 1.5694 | 7100 | 3.5005 | 0.1046 |
| 2.8383 | 1.5915 | 7200 | 3.5013 | 0.1046 |
| 2.8555 | 1.6136 | 7300 | 3.4846 | 0.1046 |
| 2.8497 | 1.6357 | 7400 | 3.4165 | 0.1071 |
| 2.857 | 1.6578 | 7500 | 3.4531 | 0.1054 |
| 2.8239 | 1.6799 | 7600 | 3.4938 | 0.1048 |
| 2.8145 | 1.7020 | 7700 | 3.4814 | 0.1050 |
| 2.8429 | 1.7241 | 7800 | 3.4734 | 0.1043 |
| 2.8146 | 1.7462 | 7900 | 3.4483 | 0.1048 |
| 2.8285 | 1.7683 | 8000 | 3.4382 | 0.1051 |
| 2.8254 | 1.7905 | 8100 | 3.4824 | 0.1049 |
| 2.8318 | 1.8126 | 8200 | 3.4698 | 0.1053 |
| 2.8299 | 1.8347 | 8300 | 3.4737 | 0.1045 |
| 2.8332 | 1.8568 | 8400 | 3.4688 | 0.1051 |
| 2.8274 | 1.8789 | 8500 | 3.4308 | 0.1054 |
| 2.8171 | 1.9010 | 8600 | 3.4647 | 0.1053 |
| 2.8355 | 1.9231 | 8700 | 3.4586 | 0.1047 |
| 2.8031 | 1.9452 | 8800 | 3.4529 | 0.1049 |
| 2.8234 | 1.9673 | 8900 | 3.4379 | 0.1053 |
| 2.8097 | 1.9894 | 9000 | 3.4536 | 0.1055 |
| 2.7828 | 2.0115 | 9100 | 3.4409 | 0.1055 |
| 2.8027 | 2.0336 | 9200 | 3.4506 | 0.1055 |
| 2.7836 | 2.0557 | 9300 | 3.4617 | 0.1053 |
| 2.7874 | 2.0778 | 9400 | 3.4509 | 0.1050 |
| 2.7894 | 2.0999 | 9500 | 3.4132 | 0.1055 |
| 2.7863 | 2.1220 | 9600 | 3.4198 | 0.1055 |
| 2.7663 | 2.1441 | 9700 | 3.4524 | 0.1054 |
| 2.7846 | 2.1662 | 9800 | 3.4518 | 0.1056 |
| 2.7985 | 2.1883 | 9900 | 3.4453 | 0.1054 |
| 2.7947 | 2.2104 | 10000 | 3.4307 | 0.1056 |
| 2.7946 | 2.2325 | 10100 | 3.4598 | 0.1055 |
| 2.783 | 2.2546 | 10200 | 3.4523 | 0.1055 |
| 2.7763 | 2.2767 | 10300 | 3.4441 | 0.1056 |
| 2.7786 | 2.2989 | 10400 | 3.4659 | 0.1052 |
| 2.7672 | 2.3210 | 10500 | 3.4527 | 0.1053 |
| 2.767 | 2.3431 | 10600 | 3.4608 | 0.1053 |
| 2.7972 | 2.3652 | 10700 | 3.4277 | 0.1060 |
| 2.7958 | 2.3873 | 10800 | 3.4488 | 0.1053 |
| 2.774 | 2.4094 | 10900 | 3.4499 | 0.1056 |
| 2.7802 | 2.4315 | 11000 | 3.4281 | 0.1056 |
| 2.7576 | 2.4536 | 11100 | 3.4363 | 0.1058 |
| 2.76 | 2.4757 | 11200 | 3.4393 | 0.1059 |
| 2.7792 | 2.4978 | 11300 | 3.4389 | 0.1056 |
| 2.7804 | 2.5199 | 11400 | 3.4378 | 0.1060 |
| 2.7804 | 2.5420 | 11500 | 3.4236 | 0.1062 |
| 2.7835 | 2.5641 | 11600 | 3.4372 | 0.1060 |
| 2.7444 | 2.5862 | 11700 | 3.4518 | 0.1058 |
| 2.7636 | 2.6083 | 11800 | 3.4181 | 0.1060 |
| 2.7675 | 2.6304 | 11900 | 3.4290 | 0.1057 |
| 2.7487 | 2.6525 | 12000 | 3.4279 | 0.1058 |
| 2.7529 | 2.6746 | 12100 | 3.4300 | 0.1058 |
| 2.7819 | 2.6967 | 12200 | 3.4153 | 0.1062 |
| 2.7595 | 2.7188 | 12300 | 3.4477 | 0.1058 |
| 2.7585 | 2.7409 | 12400 | 3.4171 | 0.1059 |
| 2.7367 | 2.7630 | 12500 | 3.4297 | 0.1059 |
| 2.7701 | 2.7851 | 12600 | 3.4184 | 0.1058 |
| 2.7811 | 2.8073 | 12700 | 3.4334 | 0.1059 |
| 2.768 | 2.8294 | 12800 | 3.4295 | 0.1062 |
| 2.7715 | 2.8515 | 12900 | 3.4443 | 0.1058 |
| 2.7479 | 2.8736 | 13000 | 3.4344 | 0.1057 |
| 2.7479 | 2.8957 | 13100 | 3.4395 | 0.1059 |
| 2.7688 | 2.9178 | 13200 | 3.4270 | 0.1058 |
| 2.7708 | 2.9399 | 13300 | 3.4311 | 0.1059 |
| 2.7443 | 2.9620 | 13400 | 3.4314 | 0.1059 |
| 2.7428 | 2.9841 | 13500 | 3.4300 | 0.1059 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
bartowski/rwkv-6-world-7b-GGUF
|
bartowski
| 2024-09-14T10:15:27Z | 4,975 | 2 | null |
[
"gguf",
"text-generation",
"base_model:RWKV/rwkv-6-world-7b",
"base_model:quantized:RWKV/rwkv-6-world-7b",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-04T15:13:04Z |
---
base_model: RWKV/rwkv-6-world-7b
pipeline_tag: text-generation
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of rwkv-6-world-7b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3751">b3751</a> for quantization.
Original model: https://huggingface.co/RWKV/rwkv-6-world-7b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## What's new:
Fix BOS/EOS tokens
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [rwkv-6-world-7b-f16.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-f16.gguf) | f16 | 15.51GB | false | Full F16 weights. |
| [rwkv-6-world-7b-Q8_0.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q8_0.gguf) | Q8_0 | 8.47GB | false | Extremely high quality, generally unneeded but max available quant. |
| [rwkv-6-world-7b-Q6_K_L.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q6_K_L.gguf) | Q6_K_L | 6.78GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [rwkv-6-world-7b-Q6_K.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q6_K.gguf) | Q6_K | 6.65GB | false | Very high quality, near perfect, *recommended*. |
| [rwkv-6-world-7b-Q5_K_L.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q5_K_L.gguf) | Q5_K_L | 5.85GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [rwkv-6-world-7b-Q5_K_M.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q5_K_M.gguf) | Q5_K_M | 5.68GB | false | High quality, *recommended*. |
| [rwkv-6-world-7b-Q5_K_S.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q5_K_S.gguf) | Q5_K_S | 5.68GB | false | High quality, *recommended*. |
| [rwkv-6-world-7b-Q4_K_L.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q4_K_L.gguf) | Q4_K_L | 4.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [rwkv-6-world-7b-Q4_K_M.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q4_K_M.gguf) | Q4_K_M | 4.78GB | false | Good quality, default size for must use cases, *recommended*. |
| [rwkv-6-world-7b-Q4_K_S.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q4_K_S.gguf) | Q4_K_S | 4.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [rwkv-6-world-7b-Q4_0_8_8.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.78GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [rwkv-6-world-7b-Q4_0_4_8.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.78GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [rwkv-6-world-7b-Q4_0_4_4.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.78GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [rwkv-6-world-7b-Q4_0.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q4_0.gguf) | Q4_0 | 4.78GB | false | Legacy format, generally not worth using over similarly sized formats |
| [rwkv-6-world-7b-IQ4_XS.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-IQ4_XS.gguf) | IQ4_XS | 4.55GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [rwkv-6-world-7b-Q3_K_XL.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q3_K_XL.gguf) | Q3_K_XL | 4.05GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [rwkv-6-world-7b-Q3_K_L.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q3_K_L.gguf) | Q3_K_L | 3.81GB | false | Lower quality but usable, good for low RAM availability. |
| [rwkv-6-world-7b-Q3_K_M.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [rwkv-6-world-7b-IQ3_M.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-IQ3_M.gguf) | IQ3_M | 3.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [rwkv-6-world-7b-Q3_K_S.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q3_K_S.gguf) | Q3_K_S | 3.81GB | false | Low quality, not recommended. |
| [rwkv-6-world-7b-IQ3_XS.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-IQ3_XS.gguf) | IQ3_XS | 3.81GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [rwkv-6-world-7b-Q2_K_L.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q2_K_L.gguf) | Q2_K_L | 3.34GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [rwkv-6-world-7b-Q2_K.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-Q2_K.gguf) | Q2_K | 3.08GB | false | Very low quality but surprisingly usable. |
| [rwkv-6-world-7b-IQ2_M.gguf](https://huggingface.co/bartowski/rwkv-6-world-7b-GGUF/blob/main/rwkv-6-world-7b-IQ2_M.gguf) | IQ2_M | 3.02GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have hugginface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/rwkv-6-world-7b-GGUF --include "rwkv-6-world-7b-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/rwkv-6-world-7b-GGUF --include "rwkv-6-world-7b-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (rwkv-6-world-7b-Q8_0) or download them all in place (./)
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
IDK-ab0ut/Yiffymix_V52-XL
|
IDK-ab0ut
| 2024-09-14T10:08:44Z | 798 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"text-to-image",
"en",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-18T11:56:44Z |
---
license: openrail++
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
# Overview📃✏️
This is a Diffusers-compatible version of [Yiffymix v52 by chilon249](https://civitai.com/models/3671?modelVersionId=732770).
See the original page for more information.
Keep in mind that this is [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning) checkpoint model, so using fewer steps (around 12 to 25) and low guidance scale (around 4 to 6) is recommended for the best result. It's also recommended to use clip skip of 2.
This repository uses DPM++ 2M Karras as its sampling method (Diffusers only).
[Installation tutorial right here](https://huggingface.co/IDK-ab0ut/Yiffymix_v51-XL).
Add `variant="fp16"` in `from_pretrained()` method to use the FP16 version.
```py
from diffusers import AutoPipelineForText2Image
import torch
model = AutoPipelineForText2Image.from_pretrained(
"IDK-ab0ut/Yiffymix_v52-XL",
variant="fp16",
torch_dtype=torch.float16,
).to("cuda")
```
# Usage Restrictions📝
By using this repository, you agree to not use the model:
```
ㅤ1. In any way that violates any applicable national, federal, state, local or international law or regulation.
ㅤ2. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way.
ㅤ3. To generate or disseminate verifiably false information and/or content with the purpose of harming others.
ㅤ4. To generate or disseminate personal identifiable information that can be used to harm an individual.
ㅤ5. To defame, disparage or otherwise harass others.
ㅤ6. For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation.
ㅤ7. For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics.
ㅤ8. To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
ㅤ9. For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
ㅤ10. To provide medical advice and medical results interpretation.
ㅤ11. To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
```
You shall use this model only for creative and artistic approach, without any intentions that may cause harm for others.
|
Xu-Ouyang/pythia-6.9b-deduped-int3-step100000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T10:04:36Z | 74 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T09:57:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/v6-Finch-7B-HF-GGUF
|
bartowski
| 2024-09-14T10:00:46Z | 394 | 0 | null |
[
"gguf",
"text-generation",
"base_model:RWKV/v6-Finch-7B-HF",
"base_model:quantized:RWKV/v6-Finch-7B-HF",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-03T17:48:42Z |
---
base_model: RWKV/v6-Finch-7B-HF
license: apache-2.0
pipeline_tag: text-generation
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of v6-Finch-7B-HF
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3751">b3751</a> for quantization.
Original model: https://huggingface.co/RWKV/v6-Finch-7B-HF
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## What's new:
Fix BOS/EOS tokens
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [v6-Finch-7B-HF-f16.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-f16.gguf) | f16 | 15.51GB | false | Full F16 weights. |
| [v6-Finch-7B-HF-Q8_0.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q8_0.gguf) | Q8_0 | 8.47GB | false | Extremely high quality, generally unneeded but max available quant. |
| [v6-Finch-7B-HF-Q6_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q6_K_L.gguf) | Q6_K_L | 6.78GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [v6-Finch-7B-HF-Q6_K.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q6_K.gguf) | Q6_K | 6.65GB | false | Very high quality, near perfect, *recommended*. |
| [v6-Finch-7B-HF-Q5_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q5_K_L.gguf) | Q5_K_L | 5.85GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [v6-Finch-7B-HF-Q5_K_M.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q5_K_M.gguf) | Q5_K_M | 5.68GB | false | High quality, *recommended*. |
| [v6-Finch-7B-HF-Q5_K_S.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q5_K_S.gguf) | Q5_K_S | 5.68GB | false | High quality, *recommended*. |
| [v6-Finch-7B-HF-Q4_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q4_K_L.gguf) | Q4_K_L | 4.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [v6-Finch-7B-HF-Q4_K_M.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q4_K_M.gguf) | Q4_K_M | 4.78GB | false | Good quality, default size for must use cases, *recommended*. |
| [v6-Finch-7B-HF-Q4_K_S.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q4_K_S.gguf) | Q4_K_S | 4.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [v6-Finch-7B-HF-Q4_0_8_8.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.78GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [v6-Finch-7B-HF-Q4_0_4_8.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.78GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [v6-Finch-7B-HF-Q4_0_4_4.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.78GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [v6-Finch-7B-HF-Q4_0.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q4_0.gguf) | Q4_0 | 4.78GB | false | Legacy format, generally not worth using over similarly sized formats |
| [v6-Finch-7B-HF-IQ4_XS.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-IQ4_XS.gguf) | IQ4_XS | 4.55GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [v6-Finch-7B-HF-Q3_K_XL.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q3_K_XL.gguf) | Q3_K_XL | 4.05GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [v6-Finch-7B-HF-Q3_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q3_K_L.gguf) | Q3_K_L | 3.81GB | false | Lower quality but usable, good for low RAM availability. |
| [v6-Finch-7B-HF-Q3_K_M.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [v6-Finch-7B-HF-IQ3_M.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-IQ3_M.gguf) | IQ3_M | 3.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [v6-Finch-7B-HF-Q3_K_S.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q3_K_S.gguf) | Q3_K_S | 3.81GB | false | Low quality, not recommended. |
| [v6-Finch-7B-HF-IQ3_XS.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-IQ3_XS.gguf) | IQ3_XS | 3.81GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [v6-Finch-7B-HF-Q2_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q2_K_L.gguf) | Q2_K_L | 3.34GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [v6-Finch-7B-HF-Q2_K.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-Q2_K.gguf) | Q2_K | 3.08GB | false | Very low quality but surprisingly usable. |
| [v6-Finch-7B-HF-IQ2_M.gguf](https://huggingface.co/bartowski/v6-Finch-7B-HF-GGUF/blob/main/v6-Finch-7B-HF-IQ2_M.gguf) | IQ2_M | 3.02GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have hugginface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/v6-Finch-7B-HF-GGUF --include "v6-Finch-7B-HF-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/v6-Finch-7B-HF-GGUF --include "v6-Finch-7B-HF-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (v6-Finch-7B-HF-Q8_0) or download them all in place (./)
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
itsTomLie/Jaundice_Classifier
|
itsTomLie
| 2024-09-14T09:55:06Z | 338 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-09-13T11:10:43Z |
---
library_name: transformers
tags: []
---
## Model Usage
```
import gradio as gr
from transformers import pipeline
from PIL import Image
import numpy as np
def predict_image(image):
pipe = pipeline("image-classification", model="itsTomLie/Jaundice_Classifier")
if isinstance(image, np.ndarray):
image = Image.fromarray(image.astype('uint8'))
elif isinstance(image, str):
image = Image.open(image)
result = pipe(image)
label = result[0]['label']
confidence = result[0]['score']
print(f"Prediction: {label}, Confidence: {confidence}")
return label, confidence
interface = gr.Interface(
fn=predict_image,
inputs=gr.Image(type="numpy", label="Upload an Image"),
outputs=[gr.Textbox(label="Prediction"), gr.Textbox(label="Confidence")]
)
interface.launch(debug=True)
```
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/llama-13b-supercot-i1-GGUF
|
mradermacher
| 2024-09-14T09:46:05Z | 22 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ausboss/llama-13b-supercot",
"base_model:quantized:ausboss/llama-13b-supercot",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-09-14T07:41:42Z |
---
base_model: ausboss/llama-13b-supercot
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ausboss/llama-13b-supercot
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama-13b-supercot-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-13b-supercot-i1-GGUF/resolve/main/llama-13b-supercot.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
BOENE/model-moore
|
BOENE
| 2024-09-14T09:40:38Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-14T09:40:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/v6-Finch-3B-HF-GGUF
|
bartowski
| 2024-09-14T09:34:13Z | 201 | 1 | null |
[
"gguf",
"text-generation",
"base_model:RWKV/v6-Finch-3B-HF",
"base_model:quantized:RWKV/v6-Finch-3B-HF",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-04T15:12:19Z |
---
base_model: RWKV/v6-Finch-3B-HF
license: apache-2.0
pipeline_tag: text-generation
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of v6-Finch-3B-HF
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3751">b3751</a> for quantization.
Original model: https://huggingface.co/RWKV/v6-Finch-3B-HF
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## What's new:
Fix BOS/EOS tokens
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [v6-Finch-3B-HF-f16.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-f16.gguf) | f16 | 6.28GB | false | Full F16 weights. |
| [v6-Finch-3B-HF-Q8_0.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q8_0.gguf) | Q8_0 | 3.41GB | false | Extremely high quality, generally unneeded but max available quant. |
| [v6-Finch-3B-HF-Q6_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q6_K_L.gguf) | Q6_K_L | 2.75GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [v6-Finch-3B-HF-Q6_K.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q6_K.gguf) | Q6_K | 2.67GB | false | Very high quality, near perfect, *recommended*. |
| [v6-Finch-3B-HF-Q5_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q5_K_L.gguf) | Q5_K_L | 2.38GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [v6-Finch-3B-HF-Q5_K_M.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q5_K_M.gguf) | Q5_K_M | 2.28GB | false | High quality, *recommended*. |
| [v6-Finch-3B-HF-Q5_K_S.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q5_K_S.gguf) | Q5_K_S | 2.28GB | false | High quality, *recommended*. |
| [v6-Finch-3B-HF-Q4_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q4_K_L.gguf) | Q4_K_L | 2.04GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [v6-Finch-3B-HF-Q4_K_M.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q4_K_M.gguf) | Q4_K_M | 1.92GB | false | Good quality, default size for must use cases, *recommended*. |
| [v6-Finch-3B-HF-Q4_K_S.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q4_K_S.gguf) | Q4_K_S | 1.92GB | false | Slightly lower quality with more space savings, *recommended*. |
| [v6-Finch-3B-HF-Q4_0_8_8.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q4_0_8_8.gguf) | Q4_0_8_8 | 1.92GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). |
| [v6-Finch-3B-HF-Q4_0_4_8.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q4_0_4_8.gguf) | Q4_0_4_8 | 1.92GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). |
| [v6-Finch-3B-HF-Q4_0_4_4.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q4_0_4_4.gguf) | Q4_0_4_4 | 1.92GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. |
| [v6-Finch-3B-HF-Q4_0.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q4_0.gguf) | Q4_0 | 1.92GB | false | Legacy format, generally not worth using over similarly sized formats |
| [v6-Finch-3B-HF-IQ4_XS.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-IQ4_XS.gguf) | IQ4_XS | 1.83GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [v6-Finch-3B-HF-Q3_K_XL.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q3_K_XL.gguf) | Q3_K_XL | 1.68GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [v6-Finch-3B-HF-Q3_K_L.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-Q3_K_L.gguf) | Q3_K_L | 1.53GB | false | Lower quality but usable, good for low RAM availability. |
| [v6-Finch-3B-HF-IQ3_M.gguf](https://huggingface.co/bartowski/v6-Finch-3B-HF-GGUF/blob/main/v6-Finch-3B-HF-IQ3_M.gguf) | IQ3_M | 1.53GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have hugginface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/v6-Finch-3B-HF-GGUF --include "v6-Finch-3B-HF-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/v6-Finch-3B-HF-GGUF --include "v6-Finch-3B-HF-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (v6-Finch-3B-HF-Q8_0) or download them all in place (./)
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
banksy235/XCoder-Complexity-Scorer
|
banksy235
| 2024-09-14T09:30:43Z | 10 | 1 | null |
[
"safetensors",
"llama",
"llama-factory",
"full",
"generated_from_trainer",
"license:other",
"region:us"
] | null | 2024-09-14T06:36:52Z |
---
license: other
base_model: Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: code_complexity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code_complexity
This model is a fine-tuned version of [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Sitges/Llama-3.1.8B-bnb-4bit-wenyanwen
|
Sitges
| 2024-09-14T09:27:48Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-14T08:45:57Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Sitges
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aashish1904/L3.1-Celestial-Stone-2x8B-Q4_K_M-GGUF
|
aashish1904
| 2024-09-14T09:05:10Z | 5 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"merge",
"llama3",
"mixtral",
"llama-cpp",
"gguf-my-repo",
"base_model:v000000/L3.1-Celestial-Stone-2x8B",
"base_model:quantized:v000000/L3.1-Celestial-Stone-2x8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-14T09:04:33Z |
---
base_model: v000000/L3.1-Celestial-Stone-2x8B
library_name: transformers
tags:
- llama
- merge
- llama3
- mixtral
- llama-cpp
- gguf-my-repo
---
# aashish1904/L3.1-Celestial-Stone-2x8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`v000000/L3.1-Celestial-Stone-2x8B`](https://huggingface.co/v000000/L3.1-Celestial-Stone-2x8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/v000000/L3.1-Celestial-Stone-2x8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aashish1904/L3.1-Celestial-Stone-2x8B-Q4_K_M-GGUF --hf-file l3.1-celestial-stone-2x8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aashish1904/L3.1-Celestial-Stone-2x8B-Q4_K_M-GGUF --hf-file l3.1-celestial-stone-2x8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aashish1904/L3.1-Celestial-Stone-2x8B-Q4_K_M-GGUF --hf-file l3.1-celestial-stone-2x8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aashish1904/L3.1-Celestial-Stone-2x8B-Q4_K_M-GGUF --hf-file l3.1-celestial-stone-2x8b-q4_k_m.gguf -c 2048
```
|
RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf
|
RichardErkhov
| 2024-09-14T09:02:33Z | 32 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-14T04:19:29Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Matter-0.1-7B-boost - GGUF
- Model creator: https://huggingface.co/0-hero/
- Original model: https://huggingface.co/0-hero/Matter-0.1-7B-boost/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Matter-0.1-7B-boost.Q2_K.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q2_K.gguf) | Q2_K | 2.53GB |
| [Matter-0.1-7B-boost.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Matter-0.1-7B-boost.IQ3_S.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Matter-0.1-7B-boost.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Matter-0.1-7B-boost.IQ3_M.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Matter-0.1-7B-boost.Q3_K.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q3_K.gguf) | Q3_K | 3.28GB |
| [Matter-0.1-7B-boost.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Matter-0.1-7B-boost.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Matter-0.1-7B-boost.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Matter-0.1-7B-boost.Q4_0.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Matter-0.1-7B-boost.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Matter-0.1-7B-boost.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Matter-0.1-7B-boost.Q4_K.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q4_K.gguf) | Q4_K | 4.07GB |
| [Matter-0.1-7B-boost.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Matter-0.1-7B-boost.Q4_1.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Matter-0.1-7B-boost.Q5_0.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Matter-0.1-7B-boost.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Matter-0.1-7B-boost.Q5_K.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q5_K.gguf) | Q5_K | 4.78GB |
| [Matter-0.1-7B-boost.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Matter-0.1-7B-boost.Q5_1.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Matter-0.1-7B-boost.Q6_K.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q6_K.gguf) | Q6_K | 5.53GB |
| [Matter-0.1-7B-boost.Q8_0.gguf](https://huggingface.co/RichardErkhov/0-hero_-_Matter-0.1-7B-boost-gguf/blob/main/Matter-0.1-7B-boost.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1
language:
- en
---
## Matter 7B - 0.1 Boost (Mistral 7B Finetune)
Matter 7B is full-finetune on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datsets analyzing >6B tokens
Boost version is finetuned on some additional data
### Training
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
Model also supports function calling. Additional tokens for function calling
Model function call tokens
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token
Function call response tokens
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
MiuN2k3/vp-infoxlm-base-dsc
|
MiuN2k3
| 2024-09-14T08:46:55Z | 163 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/infoxlm-base",
"base_model:finetune:microsoft/infoxlm-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-14T08:06:18Z |
---
library_name: transformers
base_model: microsoft/infoxlm-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: vp-infoxlm-base-dsc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vp-infoxlm-base-dsc
This model is a fine-tuned version of [microsoft/infoxlm-base](https://huggingface.co/microsoft/infoxlm-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4642
- Accuracy: 0.8251
- F1: 0.8249
- Precision: 0.8259
- Recall: 0.8251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9971 | 1.0 | 1590 | 0.8708 | 0.5664 | 0.5565 | 0.6042 | 0.5664 |
| 0.7175 | 2.0 | 3180 | 0.5943 | 0.7631 | 0.7626 | 0.7713 | 0.7631 |
| 0.5942 | 3.0 | 4770 | 0.5007 | 0.8069 | 0.8069 | 0.8075 | 0.8069 |
| 0.4981 | 4.0 | 6360 | 0.4676 | 0.8188 | 0.8182 | 0.8218 | 0.8188 |
| 0.4669 | 5.0 | 7950 | 0.4642 | 0.8251 | 0.8249 | 0.8259 | 0.8251 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-6.9b-deduped-int4-step98000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T08:42:25Z | 74 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T08:40:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mikeendale/Customer-Service2
|
mikeendale
| 2024-09-14T08:23:28Z | 218 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T08:23:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Arc53_-_docsgpt-7b-mistral-4bits
|
RichardErkhov
| 2024-09-14T08:18:25Z | 6 | 0 | null |
[
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-09-14T08:15:50Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
docsgpt-7b-mistral - bnb 4bits
- Model creator: https://huggingface.co/Arc53/
- Original model: https://huggingface.co/Arc53/docsgpt-7b-mistral/
Original model description:
---
license: apache-2.0
tags:
- rag
- closed-qa
- context
- mistral
---
DocsGPT is optimized for Documentation (RAG optimised): Specifically fine-tuned for providing answers that are based on context, making it particularly useful for developers and technical support teams.
We used the Lora fine tuning process.
This model is fine tuned on top of zephyr-7b-beta
It's an apache-2.0 license so you can use it for commercial purposes too.
Benchmarks:
Bacon:
The BACON test is an internal assessment designed to evaluate the capabilities of neural networks in handling questions with substantial content. It focuses on testing the model's understanding of context-driven queries, as well as its tendency for hallucination and attention span. The questions in both parts are carefully crafted, drawing from diverse sources such as scientific papers, complex code problems, and instructional prompts, providing a comprehensive test of the model's ability to process and generate information in various domains.
| Model | Score |
|------------------------------|-------|
| gpt-4 | 8.74 |
| DocsGPT-7b-Mistral | 8.64 |
| gpt-3.5-turbo | 8.42 |
| zephyr-7b-beta | 8.37 |
| neural-chat-7b-v3-1 | 7.88 |
| Mistral-7B-Instruct-v0.1 | 7.44 |
| openinstruct-mistral-7b | 5.86 |
| llama-2-13b | 2.29 |


MTbench with llm judge:

########## First turn ##########
| Model | Turn | Score |
|-----------------------|------|----------|
| gpt-4 | 1 | 8.956250 |
| gpt-3.5-turbo | 1 | 8.075000 |
| DocsGPT-7b-Mistral | 1 | 7.593750 |
| zephyr-7b-beta | 1 | 7.412500 |
| vicuna-13b-v1.3 | 1 | 6.812500 |
| alpaca-13b | 1 | 4.975000 |
| deepseek-coder-6.7b | 1 | 4.506329 |
########## Second turn ##########
| Model | Turn | Score |
|-----------------------|------|----------|
| gpt-4 | 2 | 9.025000 |
| gpt-3.5-turbo | 2 | 7.812500 |
| DocsGPT-7b-Mistral | 2 | 6.740000 |
| zephyr-7b-beta | 2 | 6.650000 |
| vicuna-13b-v1.3 | 2 | 5.962500 |
| deepseek-coder-6.7b | 2 | 5.025641 |
| alpaca-13b | 2 | 4.087500 |
########## Average ##########
| Model | Score |
|-----------------------|----------|
| gpt-4 | 8.990625 |
| gpt-3.5-turbo | 7.943750 |
| DocsGPT-7b-Mistral | 7.166875 |
| zephyr-7b-beta | 7.031250 |
| vicuna-13b-v1.3 | 6.387500 |
| deepseek-coder-6.7b | 4.764331 |
| alpaca-13b | 4.531250 |
To prepare your prompts make sure you keep this format:
```
### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
```
|
John6666/3d-cute-character-sdxl-v10-sdxl
|
John6666
| 2024-09-14T08:16:44Z | 278 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"3D",
"cute",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-14T08:12:22Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- 3D
- cute
---
Original model is [here](https://civitai.com/models/747975/3d-cute-character-sdxl?modelVersionId=836458).
This model created by [lybing315](https://civitai.com/user/lybing315).
|
OuteAI/Lite-Oute-2-Mamba2Attn-250M-Instruct
|
OuteAI
| 2024-09-14T08:16:38Z | 15 | 19 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-08-24T09:12:14Z |
---
license: apache-2.0
---
<style>
table {
border-collapse: collapse;
width: 100%;
margin-bottom: 20px;
}
th, td {
border: 1px solid #ddd;
padding: 8px;
text-align: center;
}
.best {
font-weight: bold;
text-decoration: underline;
}
</style>
<div style="text-align: center; margin: 20px auto; padding: 20px; border: 3px solid #ddd; border-radius: 10px;">
<h2 style="margin-bottom: 4px; margin-top: 0px;">OuteAI</h2>
<a href="https://www.outeai.com/" target="_blank" style="margin-right: 10px;">🌎 OuteAI.com</a>
<a href="https://discord.gg/vyBM87kAmf" target="_blank" style="margin-right: 10px;">🤝 Join our Discord</a>
<a href="https://x.com/OuteAI" target="_blank">𝕏 @OuteAI</a>
</div>
## Introduction
We're excited to introduce our latest model, the Lite Oute 2 Mamba2Attn 250M. <br>
This is our third generation model featuring the new Mamba2 architecture with attention layers. <br>
If you're interested in more technical details that covers the training process, architecture, and performance: <a href="https://outeai.com/blog/lite-oute-2-mamba2attn" target="_blank">Read the full blog post here</a>
## Model Variants
- [Lite-Oute-2-Mamba2Attn-250M-Instruct](https://huggingface.co/OuteAI/Lite-Oute-2-Mamba2Attn-250M-Instruct)
- [Lite-Oute-2-Mamba2Attn-250M-Base](https://huggingface.co/OuteAI/Lite-Oute-2-Mamba2Attn-250M-Base)
## Training Details
The model was pre-trained on 30 billion tokens using a balanced mixture of datasets:
- **50% dclm-baseline-1.0**
- **50% fineweb-edu**
Base model training was conducted on single NVIDIA 4090 and NVIDIA H100 GPUs, with the following key parameters:
- **Max learning rate:** 4e-4
- **Min learning rate:** 1e-4
- **Block size:** 4096
- **Token batches:** ~100k tokens
For instruction training, we first trained the model with Supervised Fine-tuning (SFT) then further refined the model using DPO training.
## Benchmark Results
<table>
<tr>
<th>Benchmark</th>
<th>Lite-Oute-2-Mamba2Attn-250M-Instruct</th>
<th>Lite-Oute-1-300M-Instruct</th>
<th>Lite-Mistral-150M-v2-Instruct</th>
</tr>
<tr>
<td>ARC-C (0-shot)</td>
<td class="best">26.71</td>
<td>26.02</td>
<td>-</td>
</tr>
<tr>
<td>ARC-E (0-shot)</td>
<td class="best">53.70</td>
<td>49.79</td>
<td>39.52</td>
</tr>
<tr>
<td>HellaSWAG (0-shot)</td>
<td class="best">38.19</td>
<td>34.50</td>
<td>31.01</td>
</tr>
<tr>
<td>MMLU (0-shot)</td>
<td class="best">25.13</td>
<td>24.00</td>
<td>25.28</td>
</tr>
<tr>
<td>OpenBookQA (0-shot)</td>
<td class="best">32.20</td>
<td>32.20</td>
<td>28.40</td>
</tr>
<tr>
<td>PIQA (0-shot)</td>
<td class="best">66.59</td>
<td>65.40</td>
<td>60.23</td>
</tr>
<tr>
<td>Winogrande (0-shot)</td>
<td>53.28</td>
<td class="best">53.75</td>
<td>51.78</td>
</tr>
<tr>
<td>ARC-C (5-shot)</td>
<td class="best">27.82</td>
<td>26.37</td>
<td>-</td>
</tr>
<tr>
<td>ARC-E (5-shot)</td>
<td class="best">57.28</td>
<td>51.43</td>
<td>46.30</td>
</tr>
<tr>
<td>HellaSWAG (5-shot)</td>
<td class="best">37.71</td>
<td>34.93</td>
<td>30.73</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>25.81</td>
<td class="best">25.87</td>
<td>-</td>
</tr>
<tr>
<td>OpenBookQA (5-shot)</td>
<td class="best">31.80</td>
<td>31.40</td>
<td>28.00</td>
</tr>
<tr>
<td>PIQA (5-shot)</td>
<td class="best">66.38</td>
<td>65.07</td>
<td>62.68</td>
</tr>
<tr>
<td>Winogrande (5-shot)</td>
<td>51.14</td>
<td class="best">52.01</td>
<td>51.30</td>
</tr>
</table>
## Interfacing with the Instruct Model
> [!IMPORTANT]
> To ensure optimal performance, please use the following template when interacting with the model:
```
<s>system\n
{System message}</s>
<s>user\n
{Your question or message}</s>
<s>assistant\n
{The model's response}</s>
```
> [!IMPORTANT]
> When prompting the model for a response, your input should be structured as follows:
```
<s>system\n
{System message}</s>
<s>user\n
{Your question or message}</s>
<s>assistant\n
```
This format ensures the best performance, as the model was fine-tuned using this specific template.
### Recommended Settings
- **Temperature:** 0.1 - 0.4
- **Repetition Penalty:** 1.1 - 1.14
If you're using a Hugging Face tokenizer, make sure to use `tokenizer.apply_chat_template` or adjust your interface to match the template above.
## Let's look at some real world examples
```
<s>user
What are three key benefits of electric vehicles compared to traditional gasoline-powered cars?</s>
<s>assistant
Electric vehicles have several advantages over gasoline-powered cars. Here are three key benefits:
1. Efficient: Electric vehicles use less fuel and emit fewer emissions than gasoline-powered cars, which can save consumers money on their monthly bills.
2. Fuel economy: Electric vehicles produce more power per mile driven, which means they consume less fuel overall. This helps reduce the need for refueling stations and reduces carbon emissions from transportation.
3. Flexibility: Electric vehicles can be adapted to different driving conditions, such as snowy or rainy days, and can accommodate different types of terrain. This makes them easier to operate in urban areas with limited parking spaces.
Overall, electric vehicles offer a range of benefits that make them an attractive choice for many drivers. They provide a cost-effective alternative to gasoline-powered cars, reducing the need for frequent refueling stations, and making it easier to drive long distances without having to worry about running out of gas.</s>
```
```
<s>user
Who wrote the play "Romeo and Juliet"?</s>
<s>assistant
The play "Romeo and Juliet" by William Shakespeare is a classic example of a tragic love story. It tells the story of Romeo and Juliet, a young couple who fall in love with each other but ultimately lose their lives due to their respective flaws. The play was written during the Renaissance period and it has been performed around the world since then.</s>
```
## Usage with HuggingFace transformers
Model weights were converted from the original Mamba2 implementation to be Hugging Face compatible. <br>
Due to the lack of official support for Mamba2 attention layers in Hugging Face Transformers, custom modeling files are included. <br>
The implementation of Mamba2 with attention in the modeling files comes from Pull Request #32027 in the Hugging Face Transformers repository: [https://github.com/huggingface/transformers/pull/32027](https://github.com/huggingface/transformers/pull/32027)
To speed up inference, we recommend installing mamba-ssm and flash attention 2.
mamba-ssm:
```bash
pip install causal-conv1d>=1.4.0
pip install mamba-ssm
```
flash attention 2:
```bash
pip install flash-attn --no-build-isolation
```
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained(
"OuteAI/Lite-Oute-2-Mamba2Attn-Instruct",
# To allow custom modeling files
trust_remote_code=True,
# If you have installed flash attention 2
# attn_implementation="flash_attention_2",
# torch_dtype=torch.bfloat16,
)
model.to(device)
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-2-Mamba2Attn-Instruct")
def generate_response(message: str, temperature: float = 0.1, repetition_penalty: float = 1.12) -> str:
# Apply the chat template and convert to PyTorch tensors
messages = [
{"role": "system", "content": "You are an AI assistant. You will be given a task. You must generate a detailed answer."},
{"role": "user", "content": message}
]
input_ids = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
).to(device)
# Generate the response
output = model.generate(
input_ids,
max_length=256,
temperature=temperature,
repetition_penalty=repetition_penalty,
do_sample=True
)
# Decode the generated output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
return generated_text
message = "Who wrote the play Romeo and Juliet?"
response = generate_response(message)
print(response)
```
## Fine-Tuning for Specific Tasks:
For optimal task-specific performance, it is recommended to fine-tune the model on datasets relevant to the intended use case. <br>
Fine-tuning can significantly improve the model's accuracy, relevance, and depth of knowledge in specialized domains or for particular types of tasks. <br>
This process allows the model to adapt its pre-trained knowledge to the nuances and requirements of specific applications, potentially mitigating some of the weaknesses observed in general use.
## Conclusion
The Lite-Oute-2-Mamba2Attn-250M-Instruct model shows significant improvements over previous versions, particularly in zero-shot and few-shot learning tasks. <br>
Despite its smaller size, it outperforms older models in most benchmarks, demonstrating better common sense reasoning, language understanding, and general knowledge. <br>
While it can provide coherent responses to various queries, it has limitations due to its small size (250 million parameters). <br>
Users should be aware that it may produce incorrect outputs and should not be relied upon for critical decisions without verification. <br>
It's best suited for lightweight applications or as a basis for further development.
## Disclaimer
By using this model, you acknowledge that you understand and assume the risks associated with its use.
You are solely responsible for ensuring compliance with all applicable laws and regulations.
We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages.
We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.
|
John6666/yuta-ai-v4-sdxl
|
John6666
| 2024-09-14T08:08:39Z | 85 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-14T07:58:59Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/738466/yuta-ai-yuta-ai?modelVersionId=840793).
This model created by [Kokkoria](https://civitai.com/user/Kokkoria).
|
John6666/uncanny-valley-v3-toon-sdxl
|
John6666
| 2024-09-14T08:04:18Z | 164 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"toon",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-14T07:58:19Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- toon
- pony
---
Original model is [here](https://civitai.com/models/507472?modelVersionId=840999).
This model created by [meden](https://civitai.com/user/meden).
|
Xu-Ouyang/pythia-6.9b-deduped-int3-step98000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T08:00:50Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T07:54:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DrawingProcess/dreambooth-sticker-yellowmonster-stkyelmon-v2
|
DrawingProcess
| 2024-09-14T07:59:46Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-09-14T07:41:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dreambooth_sticker_yellowmonster_stkyelmon_v2 Dreambooth model trained by DrawingProcess with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:

|
amc-madalin/amc-en-it
|
amc-madalin
| 2024-09-14T07:42:54Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-01-20T10:07:09Z |
---
license: mit
---
# Transformer Model for Language Translation
## Overview
This project implements a Transformer model for language translation between English and Italian. Built from scratch, it aims to provide a deeper understanding of the Transformer architecture, which has become a cornerstone in natural language processing tasks. The project explores key elements of the architecture, such as the attention mechanism, and demonstrates hands-on experience with data preprocessing, model training, and evaluation.
## Learning Objectives
- Understand and implement the Transformer model architecture.
- Explore the attention mechanism and its application in language translation.
- Gain practical experience with data preprocessing, model training, and evaluation in NLP.
## Model Card on Hugging Face
You can find and use the pre-trained model on Hugging Face here:
[Model on Hugging Face](https://huggingface.co/amc-madalin/amc-en-it/tree/main)
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("your-huggingface-model-url")
model = AutoModelForSeq2SeqLM.from_pretrained("your-huggingface-model-url")
# Translation Example
text = "Hello, how are you?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translated_text)
```
## Project Structure
- **Attention Visualization** (`attention_visual.ipynb`): A notebook for visualizing attention maps to understand how the model focuses on different sentence parts during translation.
- **Configuration Settings** (`config.py`): Includes hyperparameters and other modifiable settings.
- **Dataset Processing** (`dataset.py`): Handles loading and preprocessing of English and Italian datasets.
- **Model Architecture** (`model.py`): Defines the Transformer model architecture.
- **Project Documentation** (`README.md`): This file, which provides a complete overview of the project.
- **Experiment Logs** (`runs/`): Logs and outputs from model training sessions.
- **Tokenizers** (`tokenizer_en.json`, `tokenizer_it.json`): Tokenizers for English and Italian text preprocessing.
- **Training Script** (`train.py`): The script that encapsulates the training process.
- **Saved Model Weights** (`weights/`): Stores the trained model weights for future use.
## Installation
To set up and run the project locally, follow these steps:
1. **Clone the Repository:**
```bash
git clone https://github.com/amc-madalin/transformer-for-language-translation.git
```
2. **Create a Python Environment:**
Create a Conda environment:
```bash
conda create --name transformer python=3.x
```
Replace `3.x` with your preferred Python version.
3. **Activate the Environment:**
```bash
conda activate transformer
```
4. **Install Dependencies:**
Install required packages from `requirements.txt`:
```bash
pip install -r requirements.txt
```
5. **Prepare Data:**
The dataset will be automatically downloaded. Modify the source (`lang_src`) and target (`lang_tgt`) languages in `config.py`, if necessary. The default is set to English (`en`) and Italian (`it`):
```json
"lang_src": "en",
"lang_tgt": "it",
```
6. **Train the Model:**
Start the training process with:
```bash
python train.py
```
7. **Use the Model:**
The trained model weights will be saved in the `weights/` directory. Use these weights for inference, evaluation, or further applications.
## Using the Model with Hugging Face
Once trained, the model can be uploaded to Hugging Face for easy access and use.
### Uploading the Model to Hugging Face
Use the following steps to upload your trained model to Hugging Face:
```bash
huggingface-cli login
transformers-cli upload ./weights/ --organization your-organization
```
### Loading the Model from Hugging Face for Inference
You can easily load the model for translation tasks directly from Hugging Face:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("your-huggingface-model-url")
model = AutoModelForSeq2SeqLM.from_pretrained("your-huggingface-model-url")
# Translate text
text = "How are you?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```
## Learning Resources
- [YouTube - Coding a Transformer from Scratch on PyTorch](https://youtube.com/your-video-link)
A detailed walkthrough of coding a Transformer model from scratch using PyTorch, including training and inference.
## Acknowledgements
Special thanks to **Umar Jamil** for his guidance and contributions that supported the completion of this project.
|
poteminr/jailbreak_detector_v2
|
poteminr
| 2024-09-14T07:27:28Z | 408 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:protectai/deberta-v3-base-prompt-injection-v2",
"base_model:finetune:protectai/deberta-v3-base-prompt-injection-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-13T16:07:56Z |
---
library_name: transformers
license: apache-2.0
base_model: protectai/deberta-v3-base-prompt-injection-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: jailbreak_detector_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jailbreak_detector_v2
This model is a fine-tuned version of [protectai/deberta-v3-base-prompt-injection-v2](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3056
- Accuracy: 0.8642
- F1: 0.8523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 182 | 0.3056 | 0.8642 | 0.8523 |
| No log | 2.0 | 364 | 0.3350 | 0.8889 | 0.8824 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
byroneverson/gemma-2-27b-it-abliterated
|
byroneverson
| 2024-09-14T07:20:37Z | 5,292 | 17 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"gemma",
"gemma-2",
"chat",
"it",
"abliterated",
"conversational",
"en",
"base_model:google/gemma-2-27b-it",
"base_model:finetune:google/gemma-2-27b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-28T11:38:27Z |
---
base_model: google/gemma-2-27b-it
pipeline_tag: text-generation
license: gemma
language:
- en
tags:
- gemma
- gemma-2
- chat
- it
- abliterated
library_name: transformers
---
# gemma-2-27b-it-abliterated
## Now accepting abliteration requests. If you would like to see a model abliterated, follow me and leave me a message with model link.
This is a new approach for abliterating models using CPU only. I was able to abliterate this model using free kaggle processing with no accelerator.
1. Obtain refusal direction vector using a quant model with llama.cpp (llama-cpp-python and ggml-python).
2. Orthogonalize each .safetensors files directly from original repo and upload to a new repo. (one at a time)
Check out the <a href="https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated/blob/main/abliterate-gemma-2-27b-it.ipynb">jupyter notebook</a> for details of how this model was abliterated from gemma-2-27b-it.

|
Jesssiemin/sd-class-butterflies-32
|
Jesssiemin
| 2024-09-14T07:18:34Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-09-14T07:18:08Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Jesssiemin/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Nishantc05/qa-gptmodel
|
Nishantc05
| 2024-09-14T07:17:57Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T07:14:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aashish1904/Theia-21B-v2-Q4_K_M-GGUF
|
aashish1904
| 2024-09-14T07:13:32Z | 6 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:TheDrummer/Theia-21B-v2",
"base_model:quantized:TheDrummer/Theia-21B-v2",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-14T07:12:38Z |
---
base_model: TheDrummer/Theia-21B-v2
license: other
tags:
- llama-cpp
- gguf-my-repo
---
# aashish1904/Theia-21B-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`TheDrummer/Theia-21B-v2`](https://huggingface.co/TheDrummer/Theia-21B-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheDrummer/Theia-21B-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aashish1904/Theia-21B-v2-Q4_K_M-GGUF --hf-file theia-21b-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aashish1904/Theia-21B-v2-Q4_K_M-GGUF --hf-file theia-21b-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aashish1904/Theia-21B-v2-Q4_K_M-GGUF --hf-file theia-21b-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aashish1904/Theia-21B-v2-Q4_K_M-GGUF --hf-file theia-21b-v2-q4_k_m.gguf -c 2048
```
|
QuantFactory/Buddy-2B-v1-GGUF
|
QuantFactory
| 2024-09-14T06:56:07Z | 38 | 2 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-14T06:42:45Z |
---
license: cc-by-nc-4.0
---
[](https://hf.co/QuantFactory)
# QuantFactory/Buddy-2B-v1-GGUF
This is quantized version of [TheDrummer/Buddy-2B-v1](https://huggingface.co/TheDrummer/Buddy-2B-v1) created using llama.cpp
# Original Model Card
I'm serious about the license for this one. NON-COMMERCIAL. Ask permission.
# Buddy 2B v1
# Links
- Original: https://huggingface.co/TheDrummer/Buddy-2B-v1
- GGUF: https://huggingface.co/TheDrummer/Buddy-2B-v1-GGUF
- iMatrix: https://huggingface.co/MarsupialAI/Buddy-2B-v1_iMatrix_GGUF
- EXL2: https://huggingface.co/MarsupialAI/Buddy-2B-v1_EXL2
# Disclaimer
Please note that Buddy is not a licensed therapist and should not be relied upon for addressing serious mental health concerns such as depression, trauma, or suicidal thoughts. If you are facing these issues, it is important to seek professional help from a qualified healthcare provider.
# Description
Buddy is designed as an empathetic language model, aimed at fostering introspection, self-reflection, and personal growth through thoughtful conversation. Buddy won't judge and it won't dismiss your concerns. Get some self-care with Buddy.
# Usage
- Chat Format: Gemma or Text Completion
- Do not hold back on Buddy.
- Open up to Buddy.
- Pour your heart out to Buddy.
- Listen to Buddy.
- Remember that Buddy is just an AI.
- You will be assigned a name if you do not provide one.
- You can call Buddy anything (e.g., "Hi Alex!").
- Buddy isn't perfect, feel free to regenerate responses until one suits you.
- You can RP with this:

|
huazi123/Qwen-Qwen1.5-1.8B-1726295897
|
huazi123
| 2024-09-14T06:38:31Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-09-14T06:38:17Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
jcyriac/flux_lora_text3
|
jcyriac
| 2024-09-14T06:36:09Z | 27 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-14T06:35:53Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: '[trigger] holding a sign that says ''I LOVE PROMPTS!'''
output:
url: samples/1726295628435__000002000_0.jpg
- text: '[trigger] santa clause holding a bottle of face serum. bottle label contains
text "Salicylic Acid 02% FACE SERUM with oligopeptide-10 egcg + horehound
extract reduces oil, acne, blackheads for acne prone skin 30m1/1floz Minimalist"'
output:
url: samples/1726295641547__000002000_1.jpg
- text: '[trigger] A woman holding news papper, sitting at a cafe. Newspaper contains
text "Earnings from export of basmati had fallen sharply due to trade restrictions,
while onion growers and traders had demanded removal of export barriers. The
Centre on Friday scrapped the minimum export prices (MEPs) on basmati and
onion it had imposed last year to cool domestic food inflation, according
to separate notifications, in steps that will cheer farmers and traders."
This is in small text, but readable'
output:
url: samples/1726295654675__000002000_2.jpg
- text: "[trigger] A IT service center advertisement that contains text \"Jeswin\
\ IT services. Reliable Solutions for All Your Tech Troubles! \u2022 Virus\
\ removal and system security \u2022 Hardware repairs and upgrades \u2022\
\ Data recovery and backup solutions\""
output:
url: samples/1726295667800__000002000_3.jpg
- text: '[trigger] A film ad that contains the heros and heroins image and the text
"From visionary director James Cameron comes an epic adventure that will push
the boundaries of cinema... AQUATERRA Starring: Chris Hemsworth"'
output:
url: samples/1726295680995__000002000_4.jpg
- text: '[trigger] A circus company ad that contains the text "Prepare to be amazed,
astounded, and enchanted by the wonders of Cirque Magnifique!"'
output:
url: samples/1726295694196__000002000_5.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: p3r5on
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# flux_lora_text3
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `p3r5on` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/jcyriac/flux_lora_text3/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('jcyriac/flux_lora_text3', weight_name='flux_lora_text3')
image = pipeline('[trigger] holding a sign that says 'I LOVE PROMPTS!'').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
abiks/Nemotron-Mini-4B-Instruct-GGUF-Q8
|
abiks
| 2024-09-14T06:00:08Z | 6 | 3 |
nemo
|
[
"nemo",
"gguf",
"en",
"arxiv:2402.16819",
"arxiv:2407.14679",
"base_model:nvidia/Nemotron-Mini-4B-Instruct",
"base_model:quantized:nvidia/Nemotron-Mini-4B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-14T04:33:54Z |
---
license: other
license_name: nvidia-community-model-license
license_link: >-
https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct/resolve/main/nvidia-community-model-license-aug2024.pdf
language:
- en
base_model:
- nvidia/Nemotron-Mini-4B-Instruct
library_name: nemo
---
# Nemotron-Mini-4B-Instruct-GGUF Q8
This quantized GGUF model was created using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a>
Original model: https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct
You can run this model on [LM Studio](https://lmstudio.ai/)
## Model Overview
Nemotron-Mini-4B-Instruct is a model for generating responses for roleplaying, retrieval augmented generation, and function calling. It is a small language model (SLM) optimized through distillation, pruning and quantization for speed and on-device deployment. It is a fine-tuned version of [nvidia/Minitron-4B-Base](https://huggingface.co/nvidia/Minitron-4B-Base), which was pruned and distilled from [Nemotron-4 15B](https://arxiv.org/abs/2402.16819) using [our LLM compression technique](https://arxiv.org/abs/2407.14679). This instruct model is optimized for roleplay, RAG QA, and function calling in English. It supports a context length of 4,096 tokens. This model is ready for commercial use.
**Model Developer:** NVIDIA
**Model Dates:** Nemotron-Mini-4B-Instruct was trained between February 2024 and Aug 2024.
## License
[NVIDIA Community Model License](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct/blob/main/nvidia-community-model-license-aug2024.pdf)
## Model Architecture
Nemotron-Mini-4B-Instruct uses a model embedding size of 3072, 32 attention heads, and an MLP intermediate dimension of 9216. It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
**Architecture Type:** Transformer Decoder (auto-regressive language model)
**Network Architecture:** Nemotron-4
## Prompt Format:
We recommend using the following prompt template, which was used to fine-tune the model. The model may not perform optimally without it.
**Single Turn**
```
<extra_id_0>System
{system prompt}
<extra_id_1>User
{prompt}
<extra_id_1>Assistant\n
```
**Tool use**
```
<extra_id_0>System
{system prompt}
<tool> ... </tool>
<context> ... </context>
<extra_id_1>User
{prompt}
<extra_id_1>Assistant
<toolcall> ... </toolcall>
<extra_id_1>Tool
{tool response}
<extra_id_1>Assistant\n
```
|
Meshwa/OLMoE-1b-7b-0924-Instruct-gguf
|
Meshwa
| 2024-09-14T05:59:53Z | 33 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-13T13:55:52Z |
---
license: apache-2.0
---
|
Xu-Ouyang/pythia-6.9b-deduped-int3-step95000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T05:58:57Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T05:57:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/arco-reasoner-v1.3
|
appvoid
| 2024-09-14T05:53:14Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:appvoid/arco-interpreter-v1.2",
"base_model:finetune:appvoid/arco-interpreter-v1.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T05:53:02Z |
---
base_model: appvoid/arco-interpreter-v1.2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** appvoid
- **License:** apache-2.0
- **Finetuned from model :** appvoid/arco-interpreter-v1.2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Ruqiya/gemma2_2b_fine_tuned_arabic_dataset
|
Ruqiya
| 2024-09-14T05:51:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"ar",
"en",
"dataset:arbml/CIDAR",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-12T14:31:10Z |
---
datasets:
- arbml/CIDAR
language:
- ar
- en
library_name: transformers
---
# Gemma2_2b_fine_tuned_arabic_dataset
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on [arbml/CIDAR](https://huggingface.co/datasets/arbml/CIDAR) Arabic dataset.
[Notebook](https://bit.ly/Ar-Gemma)
|
StockLlama/StockLlama-tuned-GOOG-2023-01-01_2024-09-14
|
StockLlama
| 2024-09-14T05:40:32Z | 33 | 0 |
transformers
|
[
"transformers",
"joblib",
"safetensors",
"stockllama",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-09-14T05:40:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Veekee/en_crochet_ner
|
Veekee
| 2024-09-14T05:13:42Z | 3 | 1 |
spacy
|
[
"spacy",
"token-classification",
"en",
"region:us"
] |
token-classification
| 2024-03-08T06:31:07Z |
---
language:
- en
tags:
- spacy
- token-classification
pipeline_tag: token-classification
widget:
- text: "sc in second ch from hook, sc in each ch to end, ch, turn."
---
## English Crochet NER
Ner model that classifies crochet words, using the entities: measurement, stitch, and tool. Trained on crochet textbooks.
|
SicariusSicariiStuff/nvidia_Nemotron-Mini-4B-Instruct_FP8
|
SicariusSicariiStuff
| 2024-09-14T05:11:11Z | 6 | 0 | null |
[
"safetensors",
"nemotron",
"license:apache-2.0",
"region:us"
] | null | 2024-09-14T05:01:32Z |
---
license: apache-2.0
---
|
QuantFactory/FireStorm-Llama-3.1-8B-GGUF
|
QuantFactory
| 2024-09-14T04:59:19Z | 64 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"arxiv:2406.06623",
"arxiv:2311.07911",
"arxiv:2311.12022",
"arxiv:2406.01574",
"arxiv:1803.05457",
"arxiv:2310.16049",
"arxiv:2210.09261",
"arxiv:2109.07958",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-14T03:27:46Z |
---
base_model: unsloth/llama-3.1-storm-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
[](https://hf.co/QuantFactory)
# QuantFactory/FireStorm-Llama-3.1-8B-GGUF
This is quantized version of [EpistemeAI2/FireStorm-Llama-3.1-8B](https://huggingface.co/EpistemeAI2/FireStorm-Llama-3.1-8B) created using llama.cpp
# Original Model Card
## SFT fine tuning method:
Special fine tuned with PHD level and COT dataset to Storm COT system.
## Original Model card
## Llama 3.1 Storm

Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)
**🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b
**🚀Ollama:** `ollama run ajindal/llama3.1-storm:8b`
## TL;DR

We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model that outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).**
2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen.
3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
## 🏆 Introducing Llama-3.1-Storm-8B
[**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model improves Meta-Llama-3.1-8B-Instruct across various benchmarks - Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA benchmarks ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
We also benchmarked our model with the recently published model [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark.
## Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.
<table>
<tr>
<td><strong>Model Strength</strong>
</td>
<td><strong>Relevant Benchmarks</strong>
</td>
<tr>
<tr>
<td>🎯 Improved Instruction Following
</td>
<td>IFEval Strict (+3.93%)
</td>
<tr>
<tr>
<td>🌐 Enhanced Knowledge Driven Question Answering
</td>
<td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%)
</td>
<tr>
<tr>
<td>🧠 Better Reasoning
</td>
<td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%)
</td>
<tr>
<tr>
<td>🤖 Superior Agentic Capabilities
</td>
<td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%)
</td>
<tr>
<tr>
<td>🚫 Reduced Hallucinations
</td>
<td>TruthfulQA (+9%)
</td>
<tr>
</table>
**Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
## Llama-3.1-Storm-8B Models
1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`
---
## 💻 How to Use the Model of EpistemeAI2's FireStorm-Llama-3.1-8B
The Hugging Face `transformers` library loads the model in `bfloat16` by default. This is the type used by the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, so it’s the recommended way to run to ensure the best results.
### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```
Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:
### Conversational Use-case
#### Use with [🤗 Transformers](https://github.com/huggingface/transformers)
##### Using `transformers.pipeline()` API
```python
import transformers
import torch
model_id = "EpistemeAI2/FireStorm-Llama-3.1-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is 2+2?"}
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1]) # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
```
##### Using `model.generate()` API
```bash
pip install flash_attn==2.6.3
```
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM
# Apply Llama3.1 chat-template
def format_prompt(user_query):
template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
return template.format(user_query)
model_id = 'EpistemeAI2/FireStorm-Llama-3.1-8B'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=False,
use_flash_attention_2=True
)
# Build final input prompt after applying chat-template
prompt = format_prompt("What is 2+2?")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response) # Expected Output: '2 + 2 = 4'
```
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "EpistemeAI2/FireStorm-Llama-3.1-8B" # FP8 model: "EpistemeAI2/FireStorm-Llama-3.1-8B"
num_gpus = 1
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize = False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: 2 + 2 = 4
```
#### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)
```bash
pip install 'litgpt[all]'
litgpt download EpistemeAI2/FireStorm-Llama-3.1-8B --model_name meta-llama/Meta-Llama-3.1-8B
```
```python
from litgpt import LLM
llm = LLM.load(model="EpistemeAI2/FireStorm-Llama-3.1-8B")
llm.generate("What do Llamas eat?")
```
### Function Calling Use-case
[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct as demonstrated by the BFCL benchmark.
#### Prompt Format for Function Calling
FireStorm-Llama-3.1-8B is trained with specific system prompt for Function Calling:
```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>LIST_OF_TOOLS</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```
Above system prompt should be used with passing `LIST_OF_TOOLS` as input.
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "EpistemeAI2/FireStorm-Llama-3.1-8B" # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
def create_system_prompt(tools_list):
system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>{}</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>"""
# Convert the tools list to a string representation
tools_str = json.dumps(tools_list, ensure_ascii=False)
# Format the system prompt with the tools list
system_prompt = system_prompt_format.format(tools_str)
return system_prompt
# Example tools list
tools_list = [
{
"name": "peers",
"description": "Retrieves a list of company peers given a stock symbol.",
"parameters": {
"symbol": {
"description": "The stock symbol for the company.",
"type": "str",
"default": ""
}
}
},
{
"name": "web_chain_details",
"description": "python",
"parameters": {
"chain_slug": {
"description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
"type": "str",
"default": "ethereum"
}
}
}
]
# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize = False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```
## Use llama3.1-storm:8b ##
#### Use with [Ollama](https://ollama.com/)
```
import ollama
tools = [{
'type': 'function',
'function': {
'name': 'get_current_weather',
'description': 'Get the current weather for a city',
'parameters': {
'type': 'object',
'properties': {
'city': {
'type': 'string',
'description': 'The name of the city',
},
},
'required': ['city'],
},
},
},
{
'type': 'function',
'function': {
'name': 'get_places_to_vist',
'description': 'Get places to visit in a city',
'parameters': {
'type': 'object',
'properties': {
'city': {
'type': 'string',
'description': 'The name of the city',
},
},
'required': ['city'],
},
},
},
]
response = ollama.chat(
model='ajindal/llama3.1-storm:8b',
messages=[
{'role': 'system', 'content': 'Do not answer to nay vulgar questions.'},
{'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
],
tools=tools
)
print(response['message']) # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```
## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
## Cite Our Work
```
@misc {ashvini_kumar_jindal_2024,
author = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
title = { Llama-3.1-Storm-8B },
year = 2024,
url = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
doi = { 10.57967/hf/2902 },
publisher = { Hugging Face }
}
```
# Uploaded model
- **Developed by:** EpistemeAI2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-storm-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pristinawang/adv-ssm-hw1-fullPara-fullData-1726281318
|
pristinawang
| 2024-09-14T04:44:19Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-14T04:44:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jkazdan/collapse_gemma-2-2b_hs2_accumulate_iter2_sftsd1
|
jkazdan
| 2024-09-14T04:37:39Z | 9 | 0 | null |
[
"safetensors",
"gemma2",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2024-09-13T21:51:15Z |
---
license: gemma
base_model: google/gemma-2-2b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: collapse_gemma-2-2b_hs2_accumulate_iter2_sftsd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# collapse_gemma-2-2b_hs2_accumulate_iter2_sftsd1
This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0975
- Num Input Tokens Seen: 13721160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 1
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.3956 | 0 |
| 1.5421 | 0.0206 | 5 | 1.3563 | 284760 |
| 1.4213 | 0.0412 | 10 | 1.2364 | 571568 |
| 1.3773 | 0.0618 | 15 | 1.1718 | 845064 |
| 1.2116 | 0.0824 | 20 | 1.1443 | 1127704 |
| 1.1315 | 0.1030 | 25 | 1.1199 | 1412496 |
| 1.1024 | 0.1236 | 30 | 1.1226 | 1698920 |
| 1.0443 | 0.1441 | 35 | 1.1252 | 1986472 |
| 1.0363 | 0.1647 | 40 | 1.1266 | 2267632 |
| 1.0423 | 0.1853 | 45 | 1.1341 | 2547936 |
| 0.9706 | 0.2059 | 50 | 1.1300 | 2830576 |
| 0.9604 | 0.2265 | 55 | 1.1429 | 3118224 |
| 0.9255 | 0.2471 | 60 | 1.1355 | 3404464 |
| 0.9483 | 0.2677 | 65 | 1.1537 | 3688352 |
| 0.8534 | 0.2883 | 70 | 1.1419 | 3977080 |
| 0.8731 | 0.3089 | 75 | 1.1393 | 4258200 |
| 0.8774 | 0.3295 | 80 | 1.1458 | 4542712 |
| 0.8021 | 0.3501 | 85 | 1.1396 | 4833248 |
| 0.7919 | 0.3707 | 90 | 1.1405 | 5110392 |
| 0.765 | 0.3912 | 95 | 1.1369 | 5394440 |
| 0.6146 | 0.4118 | 100 | 1.1466 | 5677160 |
| 0.7264 | 0.4324 | 105 | 1.1348 | 5959104 |
| 0.6176 | 0.4530 | 110 | 1.1390 | 6236792 |
| 0.718 | 0.4736 | 115 | 1.1362 | 6522184 |
| 0.6601 | 0.4942 | 120 | 1.1386 | 6805272 |
| 0.7045 | 0.5148 | 125 | 1.1291 | 7080584 |
| 0.6125 | 0.5354 | 130 | 1.1355 | 7359048 |
| 0.7828 | 0.5560 | 135 | 1.1299 | 7639800 |
| 0.7475 | 0.5766 | 140 | 1.1292 | 7925000 |
| 0.7263 | 0.5972 | 145 | 1.1283 | 8212784 |
| 0.591 | 0.6178 | 150 | 1.1274 | 8498984 |
| 0.6697 | 0.6384 | 155 | 1.1224 | 8783480 |
| 0.6356 | 0.6589 | 160 | 1.1216 | 9069640 |
| 0.6016 | 0.6795 | 165 | 1.1205 | 9358968 |
| 0.5734 | 0.7001 | 170 | 1.1175 | 9644264 |
| 0.5932 | 0.7207 | 175 | 1.1157 | 9934824 |
| 0.5129 | 0.7413 | 180 | 1.1148 | 10221456 |
| 0.6567 | 0.7619 | 185 | 1.1130 | 10498184 |
| 0.6554 | 0.7825 | 190 | 1.1117 | 10777688 |
| 0.5459 | 0.8031 | 195 | 1.1105 | 11062480 |
| 0.6166 | 0.8237 | 200 | 1.1069 | 11343448 |
| 0.6983 | 0.8443 | 205 | 1.1061 | 11620888 |
| 0.5964 | 0.8649 | 210 | 1.1052 | 11908944 |
| 0.5881 | 0.8855 | 215 | 1.1031 | 12192472 |
| 0.5667 | 0.9060 | 220 | 1.1026 | 12474256 |
| 0.5131 | 0.9266 | 225 | 1.1018 | 12762728 |
| 0.5854 | 0.9472 | 230 | 1.0999 | 13045696 |
| 0.6179 | 0.9678 | 235 | 1.1003 | 13323080 |
| 0.5287 | 0.9884 | 240 | 1.0984 | 13609776 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jclian91/Qwen2-72B-Instruct-math
|
jclian91
| 2024-09-14T04:14:46Z | 13 | 4 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:bsd",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-02T02:16:21Z |
---
license: bsd
---
Welcome to Qwen2-72B-Instruct-math model, which is used for solving Math Problem.
<div align="center">
<h1>Welcome to LLM Math Solver</h1>
<h4 align="center">
<a href="https://percent4.github.io/llm_math_solver/"><img src="https://img.shields.io/badge/📄-docs-000000?style=for-the-badge&colorA=09c&colorB=555" height='35px' alt="Docs"></a>
</h4>
<p>LLM Math Solver: using LLM to solve MATH problems.
</p>
<h1></h1>
</div>
本项目已经在Github上开源,网址为:[https://github.com/percent4/llm_math_solver](https://github.com/percent4/llm_math_solver) ,更多内容可参考文档:[https://percent4.github.io/llm_math_solver/](https://percent4.github.io/llm_math_solver/) 。
## 评估结果
不同模型经过微调的数学能力测评表如下:
| 基座模型 | GSM8K | MATH | 样本数 |
|---------------------|--------|--------|------|
| QWen1.5-32B | 79.68% | 43.58% | 2402 |
| Yi-1.5-34B | 83.47% | 52.76% | 3480 |
| Yi-1.5-34B-Chat | 85.67% | 57.22% | 3479 |
| QWen-2-72B-Instruct | 93.03% | 68.54% | 3469 |
| QWen-2-72B-Instruct | **93.56%** | **69.66%** | 4799 |
其它模型:
|模型|GSM8K | MATH|
|---|---|---|
|GPT-4o-0513|95.8%|76.6%|
|Claude-3.5-Sonnet|96.4%|71.1%|
|GEMINI-1.5-PRO(May 2024)|/|67.7%|
|DeepSeek-Coder-V2-Instruct(236B)|94.9%|75.7%|
## 使用方法
1. 使用vLLM部署
命令如下:
```bash
CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model /workspace/models/Qwen2-72B-Instruct-math --served-model-name Qwen2-72B-Instruct-math --gpu-memory-utilization 0.95 --max-model-len 8192 --dtype auto --api-key token-abc123 --tensor-parallel-size 2
```
将--model参数后面的模型路径替换成你本地路径,或者直接使用项目名称。
也可以使用LLaMA-Factory框架提供的api部署命令提供模型推理服务。
**注意**:需使用两张80G显存的A100才能部署。
2. 使用Python调用
**注意**:该模型解数学题的系统人设(System Prompt)为:你是一个数学解题大师,请解决下面的数学题,给出思考过程,必要时需要给出解题过程中的Python代码。正确答案的数值用\\boxed{}包围起来,最终的答案以因此开头,不要讲多余的废话。
```python
# -*- coding: utf-8 -*-
# @file: infer.py
import os
import re
import subprocess
from openai import OpenAI
from random import choices
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/v1"
os.environ["OPENAI_API_KEY"] = "token-abc123"
client = OpenAI()
execution_desc = ["运行以上代码,输出会是: ",
"现在将上面的代码复制到Python环境中运行,运行结果为:",
"执行上述Python代码,运行结果将是:",
"上面的Python代码执行结果为:",
"运行上述代码,我们可以得到题目要求的答案。输出结果将是:"]
query = "一列火车经过南京长江大桥,大桥长6700米,这列火车长140米,火车每分钟行400米,这列火车通过长江大桥需要多少分钟?"
messages = [{"role": "system","content": "你是一个数学解题大师,请解决下面的数学题,给出思考过程,必要时需要给出解题过程中的Python代码。正确答案的数值用\\boxed{}包围起来,最终的答案以因此开头,不要讲多余的废话。"}]
messages.append({"role": "user", "content": f"题目:{query}"})
result = client.chat.completions.create(messages=messages,
model="Qwen2-72B-Instruct-math",
temperature=0.2,
stream=True)
reply_message = ""
for chunk in result:
if hasattr(chunk, "choices") and chunk.choices[0].delta.content:
reply_message += chunk.choices[0].delta.content
# find python code and execute the code
if '```python' in reply_message and '\n```' in reply_message:
messages.append({"role": "assistant", "content": '```'.join(reply_message.split('```')[:-1]) + '```'})
python_code_string = re.findall(r'```python\n(.*?)\n```', reply_message, re.S)[0]
python_file_path = 'temp.py'
with open(python_file_path, 'w') as f:
f.write(python_code_string)
python_code_run = subprocess.run(['python3', python_file_path], stdout=subprocess.PIPE, timeout=10)
if python_code_run.returncode:
raise RuntimeError("生成的Python代码无法运行!")
python_code_execution = python_code_run.stdout.decode('utf-8')
os.remove(python_file_path)
code_reply_str = choices(execution_desc, k=1)[0]
code_reply = f"\n{code_reply_str}```{python_code_execution.strip()}```\n"
reply_message += code_reply
messages.append({"role": "user", "content": code_reply})
result = client.chat.completions.create(messages=messages,
model="Qwen2-72B-Instruct-math",
temperature=0.2,
stream=True)
final_reply = ""
for chunk in result:
if hasattr(chunk, "choices") and chunk.choices[0].delta.content:
reply_message += chunk.choices[0].delta.content
final_reply += chunk.choices[0].delta.content
print(reply_message.replace('```python', '\n```python'))
```
3. 或者 使用Open WebUI中的Pipelines,提供推理页面
具体使用方法见参考文献中的 4. [Open WebUI的Pipelines学习之使用大模型解数学题](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247487013&idx=1&sn=6a6786ba8c8c7cfdbc02ef558adefe71&chksm=fcb9b7b5cbce3ea37f8fb61e743d0ea0a7d4f5d6b8e8b2c7a80171a5c8c217524d8f307c0146&token=120899150&lang=zh_CN#rd) 。







## 参考文献
关于该模型使用的训练数据、训练方法和相关文章,可以参考Github上项目: [llm_math_solver](https://github.com/percent4/llm_math_solver).
文章如下:
1. [NLP(九十七)大模型数学解题能力的初步探索](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247486824&idx=1&sn=fd6b36cf78aead227359606a7270516d&chksm=fcb9b4f8cbce3dee332335092f576c703ccdc55598cf45cb7f483f822ba5c72590019384d12a&token=321761101&lang=zh_CN#rd)
2. [NLP(九十九)大模型的数学能力微调及测评](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247486889&idx=1&sn=27c1a40d3af462f43a80a1ed401843f6&chksm=fcb9b439cbce3d2fd73e753618e0b32027314648eb13dc8b48bb9e713ad5313777c1ef27ce46&token=390124673&lang=zh_CN#rd)
3. [NLP(一百)大模型数学能力测评](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247486909&idx=1&sn=31b01bd4155b2c9ca15e2a7ae9f4de15&chksm=fcb9b42dcbce3d3bb473cf138f0f0f9a71addeff934900d155b6b90fb2a5857c1926b8aa0e9d&token=584142844&lang=zh_CN#rd)
4. [Open WebUI的Pipelines学习之使用大模型解数学题](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247487013&idx=1&sn=6a6786ba8c8c7cfdbc02ef558adefe71&chksm=fcb9b7b5cbce3ea37f8fb61e743d0ea0a7d4f5d6b8e8b2c7a80171a5c8c217524d8f307c0146&token=120899150&lang=zh_CN#rd)
5. [笔记:大模型数学解题能力](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247487038&idx=1&sn=ae458cbb6d9f23fb04229bd18961449d&chksm=fcb9b7aecbce3eb800f9b80de1c2931660b7ce1ea103f44759ed179638bad5711d357757f568&token=1938218370&lang=zh_CN#rd)
6. [NLP(一百零六)GSM8K测试集中答案错误的4道题目](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247487146&idx=1&sn=6a6fc931b76b2db3414c3208e26fe5a8&chksm=fcb9b73acbce3e2cb48fd2348d8e2225b620b93e229ecf17ac26e0982b03b7097bee529a51d4&token=552536245&lang=zh_CN#rd)
7. [NLP(一百零七)大模型解答高考数学题评测实验](https://mp.weixin.qq.com/s?__biz=MzU2NTYyMDk5MQ==&mid=2247487202&idx=1&sn=da3ad2b629b6033cacb0724349c8f7e4&chksm=fcb9b772cbce3e64c432b09d25bbcf1253b6ba78af91565eba91de189cce1af89dc72e443968&token=410216179&lang=zh_CN#rd)
|
erberry/bert-base-multilingual-uncased-finetuned-keyword
|
erberry
| 2024-09-14T04:12:08Z | 18 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-13T06:35:12Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-multilingual-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-multilingual-uncased-finetuned-keyword
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-finetuned-keyword
This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7290
- Accuracy: 0.0036
- Precision: 0.0015
- Recall: 0.0036
- F1: 0.0017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 269 | 6.7517 | 0.0012 | 0.0000 | 0.0012 | 0.0000 |
| 6.7625 | 2.0 | 538 | 6.7499 | 0.0012 | 0.0000 | 0.0012 | 0.0000 |
| 6.7625 | 3.0 | 807 | 6.7366 | 0.0024 | 0.0003 | 0.0024 | 0.0005 |
| 6.7465 | 4.0 | 1076 | 6.7290 | 0.0036 | 0.0015 | 0.0036 | 0.0017 |
| 6.7465 | 5.0 | 1345 | 6.7276 | 0.0030 | 0.0015 | 0.0030 | 0.0013 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-6.9b-deduped-int3-step93000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T03:55:17Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T03:48:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/arco-reasoner-v1.1
|
appvoid
| 2024-09-14T03:47:25Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:h2oai/h2o-danube3-500m-base",
"base_model:finetune:h2oai/h2o-danube3-500m-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T03:47:08Z |
---
base_model: h2oai/h2o-danube3-500m-base
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** appvoid
- **License:** apache-2.0
- **Finetuned from model :** h2oai/h2o-danube3-500m-base
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SirawitC/finetuned-WangchanBERTa-TSCC-property-HPTuned
|
SirawitC
| 2024-09-14T03:40:42Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:airesearch/wangchanberta-base-att-spm-uncased",
"base_model:finetune:airesearch/wangchanberta-base-att-spm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-14T03:40:23Z |
---
library_name: transformers
base_model: airesearch/wangchanberta-base-att-spm-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-WangchanBERTa-TSCC-property-HPTuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-WangchanBERTa-TSCC-property-HPTuned
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 82 | 0.2399 | 0.9390 |
| No log | 2.0 | 164 | 0.5469 | 0.9024 |
| No log | 3.0 | 246 | 0.2480 | 0.9451 |
| No log | 4.0 | 328 | 0.2242 | 0.9451 |
| No log | 5.0 | 410 | 0.2168 | 0.9451 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
newsletter/LLaMA2-13B-Tiefighter-Q8_0-GGUF
|
newsletter
| 2024-09-14T03:16:10Z | 5 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:KoboldAI/LLaMA2-13B-Tiefighter",
"base_model:quantized:KoboldAI/LLaMA2-13B-Tiefighter",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-09-14T03:13:28Z |
---
base_model: KoboldAI/LLaMA2-13B-Tiefighter
license: llama2
tags:
- llama-cpp
- gguf-my-repo
---
# newsletter/LLaMA2-13B-Tiefighter-Q8_0-GGUF
This model was converted to GGUF format from [`KoboldAI/LLaMA2-13B-Tiefighter`](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo newsletter/LLaMA2-13B-Tiefighter-Q8_0-GGUF --hf-file llama2-13b-tiefighter-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo newsletter/LLaMA2-13B-Tiefighter-Q8_0-GGUF --hf-file llama2-13b-tiefighter-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo newsletter/LLaMA2-13B-Tiefighter-Q8_0-GGUF --hf-file llama2-13b-tiefighter-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo newsletter/LLaMA2-13B-Tiefighter-Q8_0-GGUF --hf-file llama2-13b-tiefighter-q8_0.gguf -c 2048
```
|
John6666/suimix-xl-v10-sdxl
|
John6666
| 2024-09-14T02:37:21Z | 38 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"art",
"girls",
"2D",
"style",
"en",
"base_model:Hemlok/SuiMix-XL",
"base_model:finetune:Hemlok/SuiMix-XL",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-14T02:31:59Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- art
- girls
- 2D
- style
base_model: Hemlok/SuiMix-XL
---
Original model is [here](https://huggingface.co/Hemlok/SuiMix-XL) and on [Civitai](https://civitai.com/models/751353/suimix-xl?modelVersionId=840215).
This model created by [Hemlok](https://huggingface.co/Hemlok).
|
Siddartha10/outputs_cpo
|
Siddartha10
| 2024-09-14T02:36:27Z | 124 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"cpo",
"generated_from_trainer",
"conversational",
"base_model:Siddartha10/epoch_1",
"base_model:finetune:Siddartha10/epoch_1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-14T01:01:16Z |
---
library_name: transformers
license: apache-2.0
base_model: Siddartha10/epoch_1
tags:
- trl
- cpo
- generated_from_trainer
model-index:
- name: outputs_cpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs_cpo
This model is a fine-tuned version of [Siddartha10/epoch_1](https://huggingface.co/Siddartha10/epoch_1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Thebull/my-gemma-2-finetuned-model
|
Thebull
| 2024-09-14T02:35:28Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-08-17T08:24:42Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/Evolutions-Reflex-GGUF
|
QuantFactory
| 2024-09-14T02:12:27Z | 18 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:ClaudioItaly/Evolutionstory-7B-v2.2",
"base_model:merge:ClaudioItaly/Evolutionstory-7B-v2.2",
"base_model:nbeerbower/MaidFlameSoup-7B",
"base_model:merge:nbeerbower/MaidFlameSoup-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-09-14T01:35:35Z |
---
base_model:
- nbeerbower/MaidFlameSoup-7B
- ClaudioItaly/Evolutionstory-7B-v2.2
library_name: transformers
tags:
- mergekit
- merge
---
[](https://hf.co/QuantFactory)
# QuantFactory/Evolutions-Reflex-GGUF
This is quantized version of [ClaudioItaly/Evolutions-Reflex](https://huggingface.co/ClaudioItaly/Evolutions-Reflex) created using llama.cpp
# Original Model Card
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/MaidFlameSoup-7B](https://huggingface.co/nbeerbower/MaidFlameSoup-7B)
* [ClaudioItaly/Evolutionstory-7B-v2.2](https://huggingface.co/ClaudioItaly/Evolutionstory-7B-v2.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/MaidFlameSoup-7B
layer_range: [0, 32]
- model: ClaudioItaly/Evolutionstory-7B-v2.2
layer_range: [0, 32]
merge_method: slerp
base_model: ClaudioItaly/Evolutionstory-7B-v2.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
sanjay920/Phi-3.5-mini-instruct
|
sanjay920
| 2024-09-14T01:32:51Z | 21 | 1 | null |
[
"safetensors",
"gguf",
"phi3",
"llama-factory",
"freeze",
"generated_from_trainer",
"custom_code",
"license:other",
"region:us"
] | null | 2024-09-09T18:46:29Z |
---
license: other
base_model: models/Phi-3.5-mini-instruct-pro-4
tags:
- llama-factory
- freeze
- generated_from_trainer
model-index:
- name: function_calling_post_filtering_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# function_calling_post_filtering_v4
This model is a fine-tuned version of [models/Phi-3.5-mini-instruct-pro-4](https://huggingface.co/models/Phi-3.5-mini-instruct-pro-4) on the function_calling_post_filtering_v4, the function_calling_post_filtering_v4, the function_calling_post_filtering_v4, the function_calling_post_filtering_v4, the function_calling_post_filtering_v4, the mmlu_pro_training and the WildChat_116k_functions datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.43.4
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Judah04/SpeechT5-Hausa-2
|
Judah04
| 2024-09-14T01:14:25Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-09-14T00:18:35Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: SpeechT5-Hausa-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5-Hausa-2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5658 | 7.3733 | 200 | 0.5169 |
| 0.5266 | 14.7465 | 400 | 0.5300 |
| 0.4989 | 22.1198 | 600 | 0.4869 |
| 0.4747 | 29.4931 | 800 | 0.4763 |
| 0.4571 | 36.8664 | 1000 | 0.4736 |
| 0.4515 | 44.2396 | 1200 | 0.4751 |
| 0.4385 | 51.6129 | 1400 | 0.4884 |
| 0.4333 | 58.9862 | 1600 | 0.4969 |
| 0.429 | 66.3594 | 1800 | 0.5048 |
| 0.4198 | 73.7327 | 2000 | 0.5086 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
ISTA-DASLab/Meta-Llama-3-70B-AQLM-PV-1Bit-1x16
|
ISTA-DASLab
| 2024-09-14T00:52:00Z | 38 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-3",
"conversational",
"text-generation-inference",
"arxiv:2405.14852",
"arxiv:2401.06118",
"autotrain_compatible",
"endpoints_compatible",
"aqlm",
"region:us"
] |
text-generation
| 2024-06-05T17:50:01Z |
---
library_name: transformers
tags:
- llama
- facebook
- meta
- llama-3
- conversational
- text-generation-inference
---
An official quantization of [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118) .
For this quantization, we used 1 codebook of 16 bits for groups of 16 weights.
**The 1x16g16 models require aqlm inference library v1.1.6 or newer:**
`pip install aqlm[gpu,cpu]>=1.1.6`
| Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
|------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
| meta-llama/Meta-Llama-3-8B | 1x16g8 | 6.99 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-8B-AQLM-PV-2Bit-1x16) |
| meta-llama/Meta-Llama-3-8B | 1x16g16 | 9.43 | 3.9 | [Link](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-8B-AQLM-PV-1Bit-1x16) |
| meta-llama/Meta-Llama-3-70B | 1x16g8 | 4.57 | 21.9 | [Link](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-70B-AQLM-PV-2Bit-1x16)|
| meta-llama/Meta-Llama-3-70B (this) | 1x16g16 | 8.67 | 13 | [Link](https://huggingface.co/ISTA-DASLab/Meta-Llama-3-70B-AQLM-PV-2Bit-1x16)|
To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
The original code for PV-Tuning can be found in the [AQLM@pv-tuning](https://github.com/Vahe1994/AQLM/tree/pv-tuning) branch.
|
John6666/nova-reality-v50-sdxl
|
John6666
| 2024-09-14T00:49:48Z | 466 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photo",
"fantasy",
"hentai",
"landscape",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-14T00:45:26Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photo
- fantasy
- hentai
- landscape
- pony
---
Original model is [here](https://civitai.com/models/453428/nova-reality?modelVersionId=839396).
This model created by [Crody](https://civitai.com/user/Crody).
|
distily/distily_distsmollm_long
|
distily
| 2024-09-14T00:48:50Z | 20 | 0 |
Distily
|
[
"Distily",
"tensorboard",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:wikimedia/wikipedia",
"base_model:HuggingFaceTB/SmolLM-135M",
"base_model:finetune:HuggingFaceTB/SmolLM-135M",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-09-12T20:35:53Z |
---
base_model: HuggingFaceTB/SmolLM-135M
datasets:
- wikimedia/wikipedia
library_name: Distily
license: creativeml-openrail-m
tags:
- generated_from_trainer
- Distily
base_model_relation: finetune
model-index:
- name: distily_distsmollm_long
results: []
---
# Summary
Distilled with [Distily](https://github.com/lapp0/distily) library
using teacher model [HuggingFaceTB/SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M)
on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.
# Model description
More information needed
# Intended uses & limitations
More information needed
-->
# Model Architecture:
- **Architecture**: `LlamaForCausalLM`
- **Total Parameters**: 81,413,568
- **Data Type (dtype)**: torch.float32
- **Model Size**: 0.30 GB
<details>
<summary>Student Model Details</summary>
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(49152, 576)
(layers): ModuleList(
(0-14): 15 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=576, out_features=576, bias=False)
(k_proj): Linear(in_features=576, out_features=192, bias=False)
(v_proj): Linear(in_features=576, out_features=192, bias=False)
(o_proj): Linear(in_features=576, out_features=576, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LigerSwiGLUMLP(
(gate_proj): Linear(in_features=576, out_features=1536, bias=False)
(up_proj): Linear(in_features=576, out_features=1536, bias=False)
(down_proj): Linear(in_features=1536, out_features=576, bias=False)
)
(input_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
(post_attention_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
)
)
(norm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=576, out_features=49152, bias=False)
)
```
</details>
<br/>
# Resource Usage
- Max Train VRAM Use: 13.4793 GB
- Available VRAM: 23.6497 GB
- GPUs:
- 1x NVIDIA GeForce RTX 4090
- CPUs: 48
- CPU Memory: 251.5386 GB
- CPU Memory Bandwidth: 1200 GB/s
# Distillation (Teacher -> Student) Architecture Difference:
- **Architecture**: `LlamaForCausalLM` -> `LlamaForCausalLM`
- **Total Parameters**: 134,515,008 -> 81,413,568
- **Data Type (dtype)**: torch.float32 -> torch.float32
- **Model Size**: 0.25 GB -> 0.30 GB
<details>
<summary>Module Diff Details</summary>
```diff
--- teacher model modules
+++ student model modules
@@ -2,7 +2,7 @@
(model): LlamaModel(
(embed_tokens): Embedding(49152, 576)
(layers): ModuleList(
- (0-29): 30 x LlamaDecoderLayer(
+ (0-14): 15 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=576, out_features=576, bias=False)
(k_proj): Linear(in_features=576, out_features=192, bias=False)
@@ -10,17 +10,16 @@
(o_proj): Linear(in_features=576, out_features=576, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
- (mlp): LlamaMLP(
+ (mlp): LigerSwiGLUMLP(
(gate_proj): Linear(in_features=576, out_features=1536, bias=False)
(up_proj): Linear(in_features=576, out_features=1536, bias=False)
(down_proj): Linear(in_features=1536, out_features=576, bias=False)
- (act_fn): SiLU()
)
- (input_layernorm): LlamaRMSNorm((576,), eps=1e-05)
- (post_attention_layernorm): LlamaRMSNorm((576,), eps=1e-05)
+ (input_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
+ (post_attention_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
)
)
- (norm): LlamaRMSNorm((576,), eps=1e-05)
+ (norm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=576, out_features=49152, bias=False)
```
</details>
<br/>
# Train Dataset
Trained on 706,573,563 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `998,000`
- Subset: `20231101.en`
- Split: `train`
# Training Objective
```
DistillationObjective(
logits_loss_component=LossComponent(
weight=1,
loss_fn='kl'
),
hs_loss_component=LossComponent(
weight=0
),
attn_loss_component=LossComponent(
weight=0
)
)
```
# Hyperparameters
The following hyperparameters were used during training:
<details>
<summary>Expand</summary>
- learning_rate: `0.0002`
- train_batch_size: `4`
- eval_batch_size: `2`
- seed: `42`
- gradient_accumulation_steps: `2`
- total_train_batch_size: `8`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `polynomial`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(
logits_loss_component=LossComponent(
weight=1,
loss_fn='kl'
),
hs_loss_component=LossComponent(
weight=0
),
attn_loss_component=LossComponent(
weight=0
)
)`
- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x718c02862f80>`
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `{'num_hidden_layers': 15}`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `False`
- student_use_liger_kernel: `True`
- teacher_model_name_or_path: `HuggingFaceTB/SmolLM-135M`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `1000000`
- dataset_test_size: `0.002`
- dataset_shuffle: `False`
- dataset_shuffle_seed: `42`
- dataset_trust_remote_code: `False`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.0`
- warmup_steps: `0`
- gradient_checkpointing: `True`
</details>
<br/>
# Framework Versions
- Distily 0.5.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
|
Xu-Ouyang/pythia-6.9b-deduped-int4-step71000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-14T00:36:09Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-14T00:27:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onzi-suba/mixedbread-ai_mxbai-rerank-base-v1_V1___2024-09-13_19-28-18
|
onzi-suba
| 2024-09-14T00:32:09Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-14T00:31:47Z |
---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yasmineee/NLLB-600M-FFT
|
yasmineee
| 2024-09-14T00:23:16Z | 6 | 0 | null |
[
"safetensors",
"m2m_100",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-09-14T00:21:45Z |
---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
metrics:
- bleu
- rouge
model-index:
- name: NLLB-600M-FFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLLB-600M-FFT
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3513
- Bleu: 35.7724
- Rouge: 0.5734
- Gen Len: 16.8375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 1.9858 | 1.0 | 250 | 1.3645 | 35.126 | 0.5746 | 16.7 |
| 1.1589 | 2.0 | 500 | 1.3468 | 36.7577 | 0.5841 | 17.0312 |
| 0.9961 | 3.0 | 750 | 1.3513 | 35.7724 | 0.5734 | 16.8375 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF
|
mradermacher
| 2024-09-14T00:12:07Z | 288 | 2 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/LLama-3.1-8B-HyperNova-abliteration",
"base_model:quantized:bunnycore/LLama-3.1-8B-HyperNova-abliteration",
"endpoints_compatible",
"region:us"
] | null | 2024-09-13T15:48:27Z |
---
base_model: bunnycore/LLama-3.1-8B-HyperNova-abliteration
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/LLama-3.1-8B-HyperNova-abliteration
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LLama-3.1-8B-HyperNova-abliteration-GGUF/resolve/main/LLama-3.1-8B-HyperNova-abliteration.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
John6666/pornworks-sexy-beauty-v04-sdxl
|
John6666
| 2024-09-13T23:58:59Z | 14,036 | 14 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photo",
"cinematic",
"esthetic",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-13T23:37:31Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photo
- cinematic
- esthetic
---
Original model is [here](https://civitai.com/models/730895?modelVersionId=838442).
This model created by [pornworksai](https://civitai.com/user/pornworksai).
|
wrenth04/pornworks-sexy-beauty-v04-sdxl
|
wrenth04
| 2024-09-13T23:58:59Z | 10,842 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photo",
"cinematic",
"esthetic",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-25T06:44:06Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photo
- cinematic
- esthetic
---
Original model is [here](https://civitai.com/models/730895?modelVersionId=838442).
This model created by [pornworksai](https://civitai.com/user/pornworksai).
|
Xu-Ouyang/pythia-6.9b-deduped-int3-step71000-GPTQ-wikitext2-uva
|
Xu-Ouyang
| 2024-09-13T23:47:49Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-09-13T23:40:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiyeonkim/llava-tulu2sft-dpo_safeRLHF
|
jiyeonkim
| 2024-09-13T23:43:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-13T03:38:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
byroneverson/Yi-1.5-34B-Chat-abliterated-gguf
|
byroneverson
| 2024-09-13T23:21:28Z | 6 | 1 |
transformers
|
[
"transformers",
"gguf",
"llm",
"long context",
"yi",
"chat",
"it",
"abliterated",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:byroneverson/Yi-1.5-34B-Chat-abliterated",
"base_model:quantized:byroneverson/Yi-1.5-34B-Chat-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-09-13T22:59:54Z |
---
base_model: byroneverson/Yi-1.5-34B-Chat-abliterated
language:
- zh
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- llm
- long context
- yi
- chat
- it
- abliterated
- llama-cpp
- gguf-my-repo
- gguf
---
# byroneverson/Yi-1.5-34B-Chat-abliterated-gguf
This model was converted to GGUF format from [`byroneverson/Yi-1.5-34B-Chat-abliterated`](https://huggingface.co/byroneverson/Yi-1.5-34B-Chat-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/byroneverson/Yi-1.5-34B-Chat-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo byroneverson/Yi-1.5-34B-Chat-abliterated-Q4_K_M-GGUF --hf-file yi-1.5-34b-chat-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo byroneverson/Yi-1.5-34B-Chat-abliterated-Q4_K_M-GGUF --hf-file yi-1.5-34b-chat-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo byroneverson/Yi-1.5-34B-Chat-abliterated-Q4_K_M-GGUF --hf-file yi-1.5-34b-chat-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo byroneverson/Yi-1.5-34B-Chat-abliterated-Q4_K_M-GGUF --hf-file yi-1.5-34b-chat-abliterated-q4_k_m.gguf -c 2048
```
|
vevinkumar/product_category_model
|
vevinkumar
| 2024-09-13T23:09:23Z | 82 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-13T23:07:12Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** vevinkumar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Aptronym/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1-Q8_0-GGUF
|
Aptronym
| 2024-09-13T23:03:47Z | 12 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1",
"base_model:quantized:ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-13T23:03:28Z |
---
base_model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# Aptronym/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1-Q8_0-GGUF
This model was converted to GGUF format from [`ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1`](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Aptronym/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1-Q8_0-GGUF --hf-file phi-3.5-mini-3.8b-arliai-rpmax-v1.1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Aptronym/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1-Q8_0-GGUF --hf-file phi-3.5-mini-3.8b-arliai-rpmax-v1.1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Aptronym/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1-Q8_0-GGUF --hf-file phi-3.5-mini-3.8b-arliai-rpmax-v1.1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Aptronym/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1-Q8_0-GGUF --hf-file phi-3.5-mini-3.8b-arliai-rpmax-v1.1-q8_0.gguf -c 2048
```
|
habdine/Prot2Text-Large-v1-0
|
habdine
| 2024-09-13T22:32:20Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"prot2text",
"feature-extraction",
"Causal Language Modeling",
"GPT2",
"ESM2",
"Proteins",
"GNN",
"text-generation",
"custom_code",
"en",
"dataset:habdine/Prot2Text-Data",
"arxiv:2307.14367",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2024-09-13T15:26:09Z |
---
tags:
- Causal Language Modeling
- GPT2
- ESM2
- Proteins
- GNN
library_name: transformers
pipeline_tag: text-generation
language:
- en
license: cc-by-nc-4.0
datasets:
- habdine/Prot2Text-Data
metrics:
- bertscore
- bleu
- rouge
---
# Prot2Text Model Card

## Model Information
**Model Page:** [Prot2Text](http://nlp.polytechnique.fr/prot2text#proteins) <br>
**Paper:** [https://arxiv.org/abs/2307.14367](https://arxiv.org/abs/2307.14367) <br>
**Github:** [https://github.com/hadi-abdine/Prot2Text](https://github.com/hadi-abdine/Prot2Text) <br>
**Authors:** Hadi Abdine<sup>(1)</sup>, Michail Chatzianastasis<sup>(1)</sup>, Costas Bouyioukos<sup>(2, 3)</sup>, Michalis Vazirgiannis<sup>(1)</sup><br>
<sup>**(1)**</sup>DaSciM, LIX, École Polytechnique, Institut Polytechnique de Paris, France.<br>
<sup>**(2)**</sup>Epigenetics and Cell Fate, CNRS UMR7216, Université Paris Cité, Paris, France.<br>
<sup>**(3)**</sup>Bioinformatics Research Laboratory, Department of Biological Sciences, University of Cyprus, Nicosia, Cyprus.<br>
**Prot2Text** paper is published in **AAAI 2024**. Preliminary versions of the paper were accepted as a spotlight at [DGM4H@NeurIPS 2023](https://sites.google.com/ethz.ch/dgm4h-neurips2023/home?authuser=0) and [AI4Science@NeurIPS 2023](https://ai4sciencecommunity.github.io/neurips23.html).
```
@inproceedings{abdine2024prot2text,
title={Prot2Text: Multimodal Protein's Function Generation with GNNs and Transformers},
author={Abdine, Hadi and Chatzianastasis, Michail and Bouyioukos, Costas and Vazirgiannis, Michalis},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38},
pages={10757--10765},
year={2024}
}
```
### Description
Prot2Text is a family of models that predict a protein's function in a free text style, moving beyond the conventional binary or categorical classifications. By combining Graph Neural Networks(GNNs) and Large Language Models(LLMs), in an encoder-decoder framework. Prot2Text effectively integrates diverse data types including protein sequence, structure, and textual annotation and description. This multimodal approach allows for a holistic representation of proteins' functions, enabling the generation of detailed and accurate functional descriptions.
Prot2Text is trained on a [multimodal dataset](https://huggingface.co/datasets/habdine/Prot2Text-Data) that consists of 256,690 proteins. For each protein, we have three information: the correspond- ing sequence, the AlphaFold accession ID and the textual description. To build this dataset, we used the SwissProt database the only curated proteins knowledge base with full proteins’ textual description included in the UniProtKB Consortium (2016) Release 2022_04.
### Models and Results
| Model | #params | BLEU Score | ROUGE-1 | ROUGE-2 | ROUGE-L | BERT Score | Link |
|:--------------------------:|:--------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|
| Prot2Text<sub>SMALL</sub> | 256M | 30.01 | 45.78 | 38.08 | 43.97 | 82.60 | [v1.0](https://huggingface.co/habdine/Prot2Text-Small-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Small-v1-1) |
| Prot2Text<sub>BASE</sub> | 283M | 35.11 | 50.59 | 42.71 | 48.49 | 84.30 | [v1.0](https://huggingface.co/habdine/Prot2Text-Base-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Base-v1-1) |
| Prot2Text<sub>MEDIUM</sub>| 398M | 36.51 | 52.13 | 44.17 | 50.04 | 84.83 | [v1.0](https://huggingface.co/habdine/Prot2Text-Medium-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Medium-v1-1) |
| Prot2Text<sub>LARGE</sub> | 898M | 36.29 | 53.68 | 45.60 | 51.40 | 85.20 | [v1.0](https://huggingface.co/habdine/Prot2Text-Large-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Large-v1-1) |
| Esm2Text<sub>BASE</sub> | 225M | 32.11 | 47.46 | 39.18 | 45.31 | 83.21 | [v1.0](https://huggingface.co/habdine/Esm2Text-Base-v1-0)- [v1.1](https://huggingface.co/habdine/Esm2Text-Base-v1-1) |
The reported results are computed using v1.0
### Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library, graphein, DSSP, torch and torch geometric with:
```sh
pip install -U transformers
git clone https://github.com/a-r-j/graphein.git
pip install -e graphein/
pip install torch
pip install torch_geometric
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv
sudo apt-get install dssp
sudo ln -s /usr/bin/mkdssp /usr/bin/dssp
```
You might need to install different versions/variants according to your environnement.
Then, copy the snippet from the section that is relevant for your usecase.
#### Running Prot2Text to generate a protein's function using both its structure and sequence
To generate a protein's function using both its structure and amino-acid sequence, you need to load one of Prot2Text models and choose the AlphaFold database ID of the protein.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('habdine/Prot2Text-Base-v1-1',
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('habdine/Prot2Text-Base-v1-1',
trust_remote_code=True)
function = model.generate_protein_description(protein_pdbID='Q10MK9',
tokenizer=tokenizer,
device='cuda' # replace with 'mps' to run on a Mac device
)
print(function)
# 'Carboxylate--CoA ligase that may use 4-coumarate as substrate. Follows a two-step reaction mechanism, wherein the carboxylate substrate first undergoes adenylation by ATP, followed by a thioesterification in the presence of CoA to yield the final CoA thioester.'
```
<br>
#### Running Esm2Text to generate a protein's function using only its sequence
To generate a protein's function using only its amino-acid sequence, you need to load Esm2Text-Base model and pass an amino-acid sequence.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('habdine/Esm2Text-Base-v1-1',
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('habdine/Esm2Text-Base-v1-1',
trust_remote_code=True)
function = model.generate_protein_description(protein_sequence='AEQAERYEEMVEFMEKL',
tokenizer=tokenizer,
device='cuda' # replace with 'mps' to run on a Mac device
)
print(function)
# 'A cytochrome b6-f complex catalyzes the calcium-dependent hydrolysis of the 2-acyl groups in 3-sn-phosphoglycerides. Its physiological function is not known.'
```
<br>
## Notice
THE INFORMATION PROVIDED IS THEORETICAL MODELLING ONLY AND CAUTION SHOULD BE EXERCISED IN ITS USE. IT IS PROVIDED "AS-IS" WITHOUT ANY WARRANTY OF ANY KIND, WHETHER EXPRESSED OR IMPLIED. NO WARRANTY IS GIVEN THAT USE OF THE INFORMATION SHALL NOT INFRINGE THE RIGHTS OF ANY THIRD PARTY. THE INFORMATION IS NOT INTENDED TO BE A SUBSTITUTE FOR PROFESSIONAL MEDICAL ADVICE, DIAGNOSIS, OR TREATMENT, AND DOES NOT CONSTITUTE MEDICAL OR OTHER PROFESSIONAL ADVICE.
|
habdine/Prot2Text-Medium-v1-1
|
habdine
| 2024-09-13T22:31:34Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"prot2text",
"feature-extraction",
"Causal Language Modeling",
"GPT2",
"ESM2",
"Proteins",
"GNN",
"text-generation",
"custom_code",
"en",
"dataset:habdine/Prot2Text-Data",
"arxiv:2307.14367",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2024-09-13T09:24:11Z |
---
tags:
- Causal Language Modeling
- GPT2
- ESM2
- Proteins
- GNN
library_name: transformers
pipeline_tag: text-generation
language:
- en
license: cc-by-nc-4.0
datasets:
- habdine/Prot2Text-Data
metrics:
- bertscore
- bleu
- rouge
---
# Prot2Text Model Card

## Model Information
**Model Page:** [Prot2Text](http://nlp.polytechnique.fr/prot2text#proteins) <br>
**Paper:** [https://arxiv.org/abs/2307.14367](https://arxiv.org/abs/2307.14367) <br>
**Github:** [https://github.com/hadi-abdine/Prot2Text](https://github.com/hadi-abdine/Prot2Text) <br>
**Authors:** Hadi Abdine<sup>(1)</sup>, Michail Chatzianastasis<sup>(1)</sup>, Costas Bouyioukos<sup>(2, 3)</sup>, Michalis Vazirgiannis<sup>(1)</sup><br>
<sup>**(1)**</sup>DaSciM, LIX, École Polytechnique, Institut Polytechnique de Paris, France.<br>
<sup>**(2)**</sup>Epigenetics and Cell Fate, CNRS UMR7216, Université Paris Cité, Paris, France.<br>
<sup>**(3)**</sup>Bioinformatics Research Laboratory, Department of Biological Sciences, University of Cyprus, Nicosia, Cyprus.<br>
**Prot2Text** paper is published in **AAAI 2024**. Preliminary versions of the paper were accepted as a spotlight at [DGM4H@NeurIPS 2023](https://sites.google.com/ethz.ch/dgm4h-neurips2023/home?authuser=0) and [AI4Science@NeurIPS 2023](https://ai4sciencecommunity.github.io/neurips23.html).
```
@inproceedings{abdine2024prot2text,
title={Prot2Text: Multimodal Protein's Function Generation with GNNs and Transformers},
author={Abdine, Hadi and Chatzianastasis, Michail and Bouyioukos, Costas and Vazirgiannis, Michalis},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38},
pages={10757--10765},
year={2024}
}
```
### Description
Prot2Text is a family of models that predict a protein's function in a free text style, moving beyond the conventional binary or categorical classifications. By combining Graph Neural Networks(GNNs) and Large Language Models(LLMs), in an encoder-decoder framework. Prot2Text effectively integrates diverse data types including protein sequence, structure, and textual annotation and description. This multimodal approach allows for a holistic representation of proteins' functions, enabling the generation of detailed and accurate functional descriptions.
Prot2Text is trained on a [multimodal dataset](https://huggingface.co/datasets/habdine/Prot2Text-Data) that consists of 256,690 proteins. For each protein, we have three information: the correspond- ing sequence, the AlphaFold accession ID and the textual description. To build this dataset, we used the SwissProt database the only curated proteins knowledge base with full proteins’ textual description included in the UniProtKB Consortium (2016) Release 2022_04.
### Models and Results
| Model | #params | BLEU Score | ROUGE-1 | ROUGE-2 | ROUGE-L | BERT Score | Link |
|:--------------------------:|:--------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|
| Prot2Text<sub>SMALL</sub> | 256M | 30.01 | 45.78 | 38.08 | 43.97 | 82.60 | [v1.0](https://huggingface.co/habdine/Prot2Text-Small-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Small-v1-1) |
| Prot2Text<sub>BASE</sub> | 283M | 35.11 | 50.59 | 42.71 | 48.49 | 84.30 | [v1.0](https://huggingface.co/habdine/Prot2Text-Base-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Base-v1-1) |
| Prot2Text<sub>MEDIUM</sub>| 398M | 36.51 | 52.13 | 44.17 | 50.04 | 84.83 | [v1.0](https://huggingface.co/habdine/Prot2Text-Medium-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Medium-v1-1) |
| Prot2Text<sub>LARGE</sub> | 898M | 36.29 | 53.68 | 45.60 | 51.40 | 85.20 | [v1.0](https://huggingface.co/habdine/Prot2Text-Large-v1-0)- [v1.1](https://huggingface.co/habdine/Prot2Text-Large-v1-1) |
| Esm2Text<sub>BASE</sub> | 225M | 32.11 | 47.46 | 39.18 | 45.31 | 83.21 | [v1.0](https://huggingface.co/habdine/Esm2Text-Base-v1-0)- [v1.1](https://huggingface.co/habdine/Esm2Text-Base-v1-1) |
The reported results are computed using v1.0
### Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library, graphein, DSSP, torch and torch geometric with:
```sh
pip install -U transformers
git clone https://github.com/a-r-j/graphein.git
pip install -e graphein/
pip install torch
pip install torch_geometric
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv
sudo apt-get install dssp
sudo ln -s /usr/bin/mkdssp /usr/bin/dssp
```
You might need to install different versions/variants according to your environnement.
Then, copy the snippet from the section that is relevant for your usecase.
#### Running Prot2Text to generate a protein's function using both its structure and sequence
To generate a protein's function using both its structure and amino-acid sequence, you need to load one of Prot2Text models and choose the AlphaFold database ID of the protein.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('habdine/Prot2Text-Base-v1-1',
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('habdine/Prot2Text-Base-v1-1',
trust_remote_code=True)
function = model.generate_protein_description(protein_pdbID='Q10MK9',
tokenizer=tokenizer,
device='cuda' # replace with 'mps' to run on a Mac device
)
print(function)
# 'Carboxylate--CoA ligase that may use 4-coumarate as substrate. Follows a two-step reaction mechanism, wherein the carboxylate substrate first undergoes adenylation by ATP, followed by a thioesterification in the presence of CoA to yield the final CoA thioester.'
```
<br>
#### Running Esm2Text to generate a protein's function using only its sequence
To generate a protein's function using only its amino-acid sequence, you need to load Esm2Text-Base model and pass an amino-acid sequence.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('habdine/Esm2Text-Base-v1-1',
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('habdine/Esm2Text-Base-v1-1',
trust_remote_code=True)
function = model.generate_protein_description(protein_sequence='AEQAERYEEMVEFMEKL',
tokenizer=tokenizer,
device='cuda' # replace with 'mps' to run on a Mac device
)
print(function)
# 'A cytochrome b6-f complex catalyzes the calcium-dependent hydrolysis of the 2-acyl groups in 3-sn-phosphoglycerides. Its physiological function is not known.'
```
<br>
## Notice
THE INFORMATION PROVIDED IS THEORETICAL MODELLING ONLY AND CAUTION SHOULD BE EXERCISED IN ITS USE. IT IS PROVIDED "AS-IS" WITHOUT ANY WARRANTY OF ANY KIND, WHETHER EXPRESSED OR IMPLIED. NO WARRANTY IS GIVEN THAT USE OF THE INFORMATION SHALL NOT INFRINGE THE RIGHTS OF ANY THIRD PARTY. THE INFORMATION IS NOT INTENDED TO BE A SUBSTITUTE FOR PROFESSIONAL MEDICAL ADVICE, DIAGNOSIS, OR TREATMENT, AND DOES NOT CONSTITUTE MEDICAL OR OTHER PROFESSIONAL ADVICE.
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.