| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-04 12:28:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (539 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-04 12:28:29) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
RichardKhanhWin/ppo-LunarLander-v2 | RichardKhanhWin | 2024-01-29T06:49:15Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-29T06:48:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 234.28 +/- 22.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename in the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub("RichardKhanhWin/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
imagepipeline/BetterText | imagepipeline | 2024-01-29T06:47:27Z | 0 | 1 | null | ["imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-29T06:47:13Z |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## BetterText
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1b66eef3-51ee-4696-95ba-de204147bf9f/width=450/89E313EC2913FB5F6EED3BCED9ABDFC0737995D5A4394E2FACD281991BFA8977.jpeg" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - put the intended text in quotes and place the word `<text>` before it, for example: *A panda holding a sign that has the text "SAVE ME"*.
[View this model on imagepipeline.io](https://imagepipeline.io/models/BetterText?id=3a0956bd-51ce-48f8-b012-c9b7afb97bb0/)
## How to try this model?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation:
[API documentation](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sdxl/text2image/v1/run"
payload = json.dumps({
"model_id": "sdxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "3a0956bd-51ce-48f8-b012-c9b7afb97bb0",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[Browse available models](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sdxl/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at hello@imagepipeline.io
#### Visit Website
[imagepipeline.io](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
alisayuploides/model | alisayuploides | 2024-01-29T06:46:54Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-01-29T06:43:05Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### model Dreambooth model trained by alisayuploides with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
freud-sensei/headline_classification | freud-sensei | 2024-01-29T06:32:13Z | 19 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:klue/bert-base", "base_model:finetune:klue/bert-base", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-16T01:45:58Z |
---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dummy-kosts
results: []
widget:
- text: "충격! 유명 프로야구 선수 음주운전 적발"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dummy-kosts
한국어 기사 헤드라인을 입력하면 7가지 주제 중 하나로 분류해 주는 모형입니다.
This model takes Korean article headlines as input and classifies them into one of seven topics.
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on [klue/ynat](https://huggingface.co/datasets/klue/viewer/ynat).
It achieves the following results on the evaluation set:
- Loss: 0.6433
- Accuracy: 0.8642
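A minimal usage sketch with the `transformers` pipeline (the input reuses the widget example above; label names come from the model config):
```python
from transformers import pipeline

# Minimal sketch: classify a Korean headline into one of the seven topics.
classifier = pipeline("text-classification", model="freud-sensei/headline_classification")
print(classifier("충격! 유명 프로야구 선수 음주운전 적발"))  # e.g. [{'label': ..., 'score': ...}]
```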
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4139 | 1.0 | 5710 | 0.4763 | 0.8528 |
| 0.3358 | 2.0 | 11420 | 0.5527 | 0.8650 |
| 0.2045 | 3.0 | 17130 | 0.6433 | 0.8642 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
chathuranga-jayanath/codet5-small-v3 | chathuranga-jayanath | 2024-01-29T06:31:54Z | 42 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Salesforce/codet5-small", "base_model:finetune:Salesforce/codet5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-01-26T12:23:22Z |
---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-v3
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1896
- Rouge1: 75.8334
- Rouge2: 66.86
- Rougel: 75.2774
- Rougelsum: 75.2871
- Gen Len: 17.1755
## Model description
Trained with:
- dataset: chathuranga-jayanath/context-5-predict-token-for-fine-tune-without-comments-from-finmath
- epochs: 3
- metric: rouge
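A minimal loading sketch (the input below is illustrative; the exact prompt format used during fine-tuning is not documented on this card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch: load the fine-tuned checkpoint and run one generation.
tokenizer = AutoTokenizer.from_pretrained("chathuranga-jayanath/codet5-small-v3")
model = AutoModelForSeq2SeqLM.from_pretrained("chathuranga-jayanath/codet5-small-v3")

inputs = tokenizer("public int add(int a, int b) { return", return_tensors="pt")  # illustrative input
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```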
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0947 | 1.0 | 9734 | 0.1817 | 75.816 | 66.5223 | 75.2217 | 75.2268 | 17.1902 |
| 0.0622 | 2.0 | 19468 | 0.1886 | 75.8598 | 66.7948 | 75.3022 | 75.3025 | 17.1922 |
| 0.0572 | 3.0 | 29202 | 0.1896 | 75.8334 | 66.86 | 75.2774 | 75.2871 | 17.1755 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
SCE/Mistral-7B-summ-lora-tuned | SCE | 2024-01-29T06:25:40Z | 0 | 0 | null | ["safetensors", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us"] | null | 2024-01-29T06:23:01Z |
---
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
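Pending the official snippet, here is a minimal loading sketch; it assumes this repository hosts a PEFT LoRA adapter on top of `mistralai/Mistral-7B-v0.1` (neither detail is confirmed by this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: this repo contains a PEFT LoRA adapter trained from Mistral-7B;
# swap in the correct base model if the card is updated.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "SCE/Mistral-7B-summ-lora-tuned")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```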
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ThuyNT03/KLTN_CSI_xlm | ThuyNT03 | 2024-01-29T06:20:05Z | 90 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-29T05:37:58Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: KLTN_CSI_xlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_CSI_xlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 369 | 0.1818 |
| 0.3102 | 2.0 | 738 | 0.1137 |
| 0.1924 | 3.0 | 1107 | 0.0781 |
| 0.1924 | 4.0 | 1476 | 0.0495 |
| 0.1345 | 5.0 | 1845 | 0.0299 |
| 0.0815 | 6.0 | 2214 | 0.0810 |
| 0.0574 | 7.0 | 2583 | 0.0148 |
| 0.0574 | 8.0 | 2952 | 0.0090 |
| 0.0324 | 9.0 | 3321 | 0.0096 |
| 0.0131 | 10.0 | 3690 | 0.0086 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
SCE/Mistral-7B-math-ia3-pruned20 | SCE | 2024-01-29T06:18:05Z | 0 | 0 | null | ["safetensors", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us"] | null | 2024-01-29T06:06:05Z |
---
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SCE/Mistral-7B-summ-ia3-pruned20 | SCE | 2024-01-29T06:14:01Z | 0 | 0 | null | ["safetensors", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us"] | null | 2024-01-29T06:10:16Z |
---
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SCE/Mistral-7B-summ-ia3-pruned10 | SCE | 2024-01-29T06:13:53Z | 0 | 0 | null | ["safetensors", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us"] | null | 2024-01-29T06:09:58Z |
---
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SCE/Mistral-7B-summ-ia3-tuned | SCE | 2024-01-29T06:12:55Z | 0 | 0 | null | ["safetensors", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us"] | null | 2024-01-29T06:07:32Z |
---
license: cc-by-nc-4.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF | MaziyarPanahi | 2024-01-29T06:11:01Z | 40 | 0 | transformers | ["transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "anakin87/zephyr-7b-alpha-sharded", "en", "arxiv:2305.18290", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us", "license:apache-2.0", "base_model:MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1", "conversational"] | text-generation | 2024-01-29T06:00:11Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- anakin87/zephyr-7b-alpha-sharded
- en
- arxiv:2305.18290
- license:mit
- autotrain_compatible
- endpoints_compatible
- region:us
- license:apache-2.0
model_name: zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 - Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
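For example, a minimal LangChain + llama-cpp-python sketch (the import path may differ across LangChain versions; the model path assumes the file downloaded above):
```python
from langchain_community.llms import LlamaCpp

# Minimal sketch: wrap the downloaded GGUF file as a LangChain LLM.
llm = LlamaCpp(
    model_path="./zephyr-7b-alpha-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,  # remove if you have no GPU acceleration
)
print(llm.invoke("Write a haiku about llamas."))
```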
|
jlbaker361/ft1000-runway-30 | jlbaker361 | 2024-01-29T05:50:25Z | 0 | 0 | null | ["safetensors", "region:us"] | null | 2024-01-29T03:50:29Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - jlbaker361/ft1000-runway-30
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the jlbaker361/wikiart-balanced1000 dataset.
Training epochs = 1
num_train_timesteps = 30
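A minimal sketch for applying these weights with `diffusers` (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the base model, then apply these LoRA weights on top (assumes a CUDA GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("jlbaker361/ft1000-runway-30")
image = pipe("a painting in the style of wikiart").images[0]  # illustrative prompt
```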
Four example images were embedded in the original card.
|
Locutusque/Hercules-1.0-Mistral-7B-GGUF | Locutusque | 2024-01-29T05:37:47Z | 0 | 0 | null | ["gguf", "endpoints_compatible", "region:us", "conversational"] | null | 2024-01-29T05:33:55Z |
Converted Locutusque/Hercules-1.0-Mistral-7B to GGUF format (dtype: FP16).
|
MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF | MaziyarPanahi | 2024-01-29T05:34:13Z | 37 | 0 | transformers | ["transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "Safetensors", "text-generation-inference", "merge", "7b", "mistralai/Mistral-7B-Instruct-v0.1", "alexsherstinsky/Mistral-7B-v0.1-sharded", "pytorch", "pretrained", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us", "base_model:MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1", "conversational"] | text-generation | 2024-01-29T05:23:32Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- alexsherstinsky/Mistral-7B-v0.1-sharded
- pytorch
- pretrained
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
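As a rule of thumb, these bpw figures translate directly into download size. A minimal sketch (the parameter count is an assumption for a Mistral-7B-class model; real files run slightly larger because some tensors stay at higher precision):
```python
# Rough GGUF file-size estimate from the quoted bits-per-weight figures.
params = 7.24e9  # assumed parameter count for a Mistral-7B-class model
for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375),
                  ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    size_gib = params * bpw / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{name}: ~{size_gib:.1f} GiB")
```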
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
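As a starting point, a minimal LangChain sketch using the llama-cpp-python integration (the `langchain_community` import path and the local file name are assumptions):
```python
from langchain_community.llms import LlamaCpp

# Minimal sketch: point the LangChain wrapper at the GGUF file downloaded above.
llm = LlamaCpp(
    model_path="./Mistral-7B-v0.1-sharded-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # max sequence length, as in the llama.cpp example above
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm.invoke("Explain in one sentence what the GGUF format is."))
```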
|
budecosystem/code-millenials-3b
|
budecosystem
| 2024-01-29T05:26:40Z | 97 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"code",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-14T06:58:09Z |
---
library_name: transformers
tags:
- code
---
# Bud Code Millenials 3B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval value of 88.2 and a context size of 168K. We will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to jithinvg@bud.studio
### News 🔥🔥🔥
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result-3b.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the Millenials models, the eval script in the GitHub repo was used to produce the results above.
Note: The HumanEval values of the other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities), etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### ๐ Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-3b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-3b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction} ### Response:"""
instruction = "Your code instruction here"
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 8 A100 80GB GPUs for approximately 6 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 3 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 26289 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.15 |
| optimizer | adamw |
| fp16 | True |
| GPU | 8 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding.
|
Luna-Skywalker/corn_dtect
|
Luna-Skywalker
| 2024-01-29T05:14:01Z | 0 | 0 |
fastai
|
[
"fastai",
"pytorch",
"timm",
"image-classification",
"arxiv:2201.03545",
"license:mit",
"region:us"
] |
image-classification
| 2024-01-29T05:04:57Z |
---
tags:
- fastai
- pytorch
- timm
license: mit
metrics:
- accuracy
library_name: fastai
pipeline_tag: image-classification
---
# Corn disease detection
This model is trained to detect whether a corn plant is healthy or has diseases such as Common Rust, Gray Leaf Spot, or Northern Leaf Blight.
# Enjoy
---
# Model card
A ConvNeXT model used for image classification.
## Model Details
- **Model Type:** Image classification
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Dataset:** New Bangladeshi Crop Disease: https://www.kaggle.com/datasets/nafishamoin/new-bangladeshi-crop-disease
- **Original:** https://github.com/facebookresearch/ConvNeXt
|
Chenxi-Chelsea-Liu/whisper-small-noisy-hindi-10dB
|
Chenxi-Chelsea-Liu
| 2024-01-29T05:12:29Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-28T07:10:26Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-noisy-hindi-10dB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-noisy-hindi-10dB
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7442
- Wer: 41.8554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
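For reference, a sketch of how these settings map onto `transformers` `Seq2SeqTrainingArguments` (argument names assumed from the standard Whisper fine-tuning recipe; `output_dir` is illustrative):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-noisy-hindi-10dB",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3000,
    fp16=True,  # mixed precision (native AMP)
)
```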
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.6146 | 0.61 | 50 | 1.3244 | 85.2585 |
| 0.8209 | 1.22 | 100 | 0.7607 | 55.4556 |
| 0.6434 | 1.83 | 150 | 0.6184 | 51.5822 |
| 0.5053 | 2.44 | 200 | 0.5191 | 46.7404 |
| 0.409 | 3.05 | 250 | 0.4271 | 41.9938 |
| 0.265 | 3.66 | 300 | 0.3151 | 39.4778 |
| 0.1786 | 4.27 | 350 | 0.2965 | 37.3076 |
| 0.1617 | 4.88 | 400 | 0.2826 | 36.2355 |
| 0.103 | 5.49 | 450 | 0.2877 | 35.5957 |
| 0.0907 | 6.1 | 500 | 0.2929 | 35.3450 |
| 0.0595 | 6.71 | 550 | 0.3032 | 34.8262 |
| 0.0338 | 7.32 | 600 | 0.3186 | 34.7743 |
| 0.0365 | 7.93 | 650 | 0.3303 | 34.3853 |
| 0.021 | 8.54 | 700 | 0.3414 | 34.3420 |
| 0.0174 | 9.15 | 750 | 0.3561 | 34.1605 |
| 0.0129 | 9.76 | 800 | 0.3619 | 34.3247 |
| 0.009 | 10.37 | 850 | 0.3681 | 33.9703 |
| 0.0082 | 10.98 | 900 | 0.3802 | 34.2469 |
| 0.006 | 11.59 | 950 | 0.3817 | 33.4083 |
| 0.0052 | 12.2 | 1000 | 0.4054 | 34.4112 |
| 0.005 | 12.8 | 1050 | 0.4113 | 34.2123 |
| 0.0041 | 13.41 | 1100 | 0.4139 | 33.8060 |
| 0.0043 | 14.02 | 1150 | 0.4161 | 32.9500 |
| 0.0028 | 14.63 | 1200 | 0.4284 | 33.0192 |
| 0.0027 | 15.24 | 1250 | 0.4349 | 33.1229 |
| 0.0027 | 15.85 | 1300 | 0.4253 | 32.7598 |
| 0.0022 | 16.46 | 1350 | 0.4419 | 33.1143 |
| 0.0023 | 17.07 | 1400 | 0.4453 | 32.9154 |
| 0.002 | 17.68 | 1450 | 0.4457 | 32.5696 |
| 0.0014 | 18.29 | 1500 | 0.4592 | 32.8809 |
| 0.0014 | 18.9 | 1550 | 0.4757 | 32.8290 |
| 0.001 | 19.51 | 1600 | 0.4767 | 33.4169 |
| 0.0008 | 20.12 | 1650 | 0.4876 | 32.4831 |
| 0.0008 | 20.73 | 1700 | 0.4905 | 32.9760 |
| 0.0011 | 21.34 | 1750 | 0.4876 | 32.7252 |
| 0.0007 | 21.95 | 1800 | 0.4992 | 33.0105 |
| 0.0003 | 22.56 | 1850 | 0.5190 | 32.3102 |
| 0.0007 | 23.17 | 1900 | 0.5240 | 32.6734 |
| 0.0005 | 23.78 | 1950 | 0.5315 | 32.8809 |
| 0.0003 | 24.39 | 2000 | 0.5333 | 32.7771 |
| 0.0002 | 25.0 | 2050 | 0.5441 | 32.1200 |
| 0.0001 | 25.61 | 2100 | 0.5626 | 32.4313 |
| 0.0001 | 26.22 | 2150 | 0.5690 | 32.1546 |
| 0.0001 | 26.83 | 2200 | 0.5861 | 32.1978 |
| 0.0001 | 27.44 | 2250 | 0.6071 | 32.0163 |
| 0.0 | 28.05 | 2300 | 0.6214 | 32.6388 |
| 0.0001 | 28.66 | 2350 | 0.6333 | 32.7512 |
| 0.0 | 29.27 | 2400 | 0.6525 | 32.5782 |
| 0.0 | 29.88 | 2450 | 0.6627 | 32.6647 |
| 0.0 | 30.49 | 2500 | 0.6759 | 32.5523 |
| 0.0 | 31.1 | 2550 | 0.6960 | 33.3737 |
| 0.0 | 31.71 | 2600 | 0.7087 | 34.1864 |
| 0.0 | 32.32 | 2650 | 0.7228 | 34.4544 |
| 0.0 | 32.93 | 2700 | 0.7274 | 35.1634 |
| 0.0 | 33.54 | 2750 | 0.7327 | 35.7254 |
| 0.0 | 34.15 | 2800 | 0.7369 | 37.0569 |
| 0.0 | 34.76 | 2850 | 0.7405 | 38.2155 |
| 0.0 | 35.37 | 2900 | 0.7433 | 40.8871 |
| 0.0 | 35.98 | 2950 | 0.7441 | 41.6739 |
| 0.0 | 36.59 | 3000 | 0.7442 | 41.8554 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 1.12.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Luna-Skywalker/wheat_dtect
|
Luna-Skywalker
| 2024-01-29T05:09:19Z | 0 | 2 |
fastai
|
[
"fastai",
"pytorch",
"timm",
"image-classification",
"arxiv:2201.03545",
"license:mit",
"region:us"
] |
image-classification
| 2024-01-29T05:08:06Z |
---
tags:
- fastai
- pytorch
- timm
license: mit
metrics:
- accuracy
library_name: fastai
pipeline_tag: image-classification
---
# Wheat disease detection
This model is trained to detect whether a wheat plant is healthy or has diseases such as Yellow Rust or Brown Rust.
# Enjoy
---
# Model card
A ConvNeXT model used for image classification.
## Model Details
- **Model Type:** Image classification
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Dataset:** New Bangladeshi Crop Disease: https://www.kaggle.com/datasets/nafishamoin/new-bangladeshi-crop-disease
- **Original:** https://github.com/facebookresearch/ConvNeXt
|
vseetharamamugeraya/panda-artist
|
vseetharamamugeraya
| 2024-01-29T05:06:44Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-29T04:59:41Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Panda-artist Dreambooth model trained by vseetharamamugeraya following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: C52
Sample pictures of this concept:



|
Vinod-IE/llama-2-7b-hf-small-far
|
Vinod-IE
| 2024-01-29T05:04:12Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T05:03:59Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
jingyeom/KoSoLAR-10.7B-v0.2_1.3_dedup_p
|
jingyeom
| 2024-01-29T05:01:06Z | 133 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T13:01:15Z |
---
license: apache-2.0
---
## Model
base_model : yanolja/KoSOLAR-10.7B-v0.2
## Dataset
* Collected publicly available data
* Applied the algorithm from "Deduplicating Training Data Makes Language Models Better"
## Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "jingyeom/KoSoLAR-10.7B-v0.2_1.3_dedup"
model = AutoModelForCausalLM.from_pretrained(
model_name,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
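A minimal generation sketch on top of the loading code above (the prompt and decoding settings are illustrative):
```python
prompt = "대한민국의 수도는 어디인가요?"  # illustrative prompt: "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```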
## Benchmark
**[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
|
MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T04:55:34Z | 38 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"cookinai/CatMacaroni-Slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T04:44:58Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- cookinai/CatMacaroni-Slerp
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./CatMacaroni-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
LoneStriker/CodeMate-v0.1-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-29T04:52:17Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"CodeMate",
"Code",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T04:41:22Z |
---
license: llama2
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- CodeMate
- Code
---
# **CodeMate-v0.1**
CodeMate-v0.1 is an intelligent programming assistant developed by [CodeMate](https://codemate.ai).
This model aims to assist users in generating high-quality code solutions for programming problems.
Please note that this model is currently at version 0.1.
## Model Details
- **Training Data:** Exclusively fine-tuned on a proprietary dataset of 1.8 billion tokens of high-quality programming problems and solutions.
- The dataset was generated manually and is internal to CodeMate.
- **Training Techniques:** The model was fine-tuned using Flash Attention 2, trained over 15 hours on 40 A100-80GB GPUs.
- A sequence length of 8096 tokens was used during training.
- **Multilingual Support:** CodeMate-v0.1 is proficient in multiple programming languages, including Python, C/C++, TypeScript, Java, and more.
## How to Get Started with the Model
Make sure to install Transformers from the main git branch:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
## How to Prompt the Model
This model accepts prompts in the Alpaca/Vicuna instruction format. For example:
```markdown
### System Prompt
You are an intelligent programming assistant.
### User Message
Implement a linked list in C++
### Assistant
...
```
## Load the Model:
To load the model, utilize the following Python script:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Initialize the model
model_path = "codemateai/CodeMate-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)
# ... generate response ...
```
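To fill in the elided generation step, a minimal sketch using the prompt format shown above (generation settings and the example instruction are illustrative):
```python
prompt = """### System Prompt
You are an intelligent programming assistant.

### User Message
Implement a linked list in C++

### Assistant
"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```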
## Bias, Risks, and Limitations
This model has undergone very limited testing. CodeMate recommends additional safety testing before any real-world deployments.
For more information and updates, visit the [CodeMate website](https://codemate.ai).
|
Nicolas852/Reinforce-cartpole-v1
|
Nicolas852
| 2024-01-29T04:42:05Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-29T04:41:56Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T04:36:50Z | 47 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"jondurbin/bagel-7b-v0.1",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T04:25:58Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- jondurbin/bagel-7b-v0.1
- dataset:ai2_arc
- dataset:unalignment/spicy-3.1
- dataset:codeparrot/apps
- dataset:facebook/belebele
- dataset:boolq
- dataset:jondurbin/cinematika-v0.1
- dataset:drop
- dataset:lmsys/lmsys-chat-1m
- dataset:TIGER-Lab/MathInstruct
- dataset:cais/mmlu
- dataset:Muennighoff/natural-instructions
- dataset:openbookqa
- dataset:piqa
- dataset:Vezora/Tested-22k-Python-Alpaca
- dataset:cakiki/rosetta-code
- dataset:Open-Orca/SlimOrca
- dataset:spider
- dataset:squad_v2
- dataset:migtissera/Synthia-v1.3
- dataset:datasets/winogrande
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./bagel-7b-v0.1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
reecursion/xlm-roberta-base-inspiration
|
reecursion
| 2024-01-29T04:32:58Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-26T11:28:18Z |
---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-inspiration
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-inspiration
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6599
- Accuracy: 0.8697
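A quick inference sketch (the label set depends on the training data, which is not documented here; the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="reecursion/xlm-roberta-base-inspiration")
print(classifier("Her speech inspired the whole team to keep going."))
```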
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6109 | 1.0 | 1237 | 0.3765 | 0.8406 |
| 0.3053 | 2.0 | 2474 | 0.3668 | 0.8503 |
| 0.2276 | 3.0 | 3711 | 0.5105 | 0.8673 |
| 0.2864 | 4.0 | 4948 | 0.5916 | 0.8608 |
| 0.0791 | 5.0 | 6185 | 0.6599 | 0.8697 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jd2g/sd-class-butterflies-32
|
jd2g
| 2024-01-29T03:55:24Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-01-29T03:55:06Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute butterflies 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jd2g/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
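To keep the generated sample, a one-line follow-up (file name illustrative):
```python
image.save("butterfly.png")  # `image` is the PIL image returned by the pipeline above
```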
|
backnotprop/np_cr_model3
|
backnotprop
| 2024-01-29T03:53:22Z | 13 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-28T21:56:52Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'spiral wave flower by <s0><s1>,minimalism,abstract,photoshop generated abstract colorful object mesh'
output:
url:
"image_0.png"
- text: 'spiral wave flower by <s0><s1>,minimalism,abstract,photoshop generated abstract colorful object mesh'
output:
url:
"image_1.png"
- text: 'spiral wave flower by <s0><s1>,minimalism,abstract,photoshop generated abstract colorful object mesh'
output:
url:
"image_2.png"
- text: 'spiral wave flower by <s0><s1>,minimalism,abstract,photoshop generated abstract colorful object mesh'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: something by <s0><s1>,minimalism,abstract,photoshop generated abstract colorful object mesh
license: openrail++
---
# SDXL LoRA DreamBooth - backnotprop/np_cr_model3
<Gallery />
## Model description
### These are backnotprop/np_cr_model3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`np_cr_model3.safetensors` here 💾](/backnotprop/np_cr_model3/blob/main/np_cr_model3.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:np_cr_model3:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`np_cr_model3_emb.safetensors` here 💾](/backnotprop/np_cr_model3/blob/main/np_cr_model3_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `np_cr_model3_emb` to your prompt. For example, `something by np_cr_model3_emb,minimalism,abstract,photoshop generated abstract colorful object mesh`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('backnotprop/np_cr_model3', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='backnotprop/np_cr_model3', filename='np_cr_model3_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('spiral wave flower by <s0><s1>,minimalism,abstract,photoshop generated abstract colorful object mesh').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/backnotprop/np_cr_model3/tree/main).
The weights were trained using [๐งจ diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
happyxujin/ppo-Huggy
|
happyxujin
| 2024-01-29T03:48:26Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-29T03:48:07Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: happyxujin/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
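If you retrain Huggy, the Hub integration used in the course can push the run back with `mlagents-push-to-hf`; a sketch with placeholder values (adjust the run id, results directory, and repo id to your setup):
```bash
mlagents-push-to-hf \
  --run-id="Huggy" \
  --local-dir="./results/Huggy" \
  --repo-id="happyxujin/ppo-Huggy" \
  --commit-message="Retrained Huggy"
```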
|
mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL
|
mlx-community
| 2024-01-29T03:45:29Z | 34 | 1 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T07:24:39Z |
---
license: mit
---
## [mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL](https://huggingface.co/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL)
I have uploaded the fine-tuned model to the HuggingFace Hub so that everyone can try it out.
### Install mlx-lm
```bash
pip install mlx-lm
```
### Generate SQL
```
python -m mlx_lm.generate --model mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: Which school did Wang Junjian come from?
A: "
```
```
SELECT School FROM Students WHERE Name = 'Wang Junjian'
```
## [Fine-tuning Text2SQL based on Mistral-7B using LoRA on MLX (1)](https://wangjunjian.com/mlx/lora/2024/01/23/Fine-tuning-Text2SQL-based-on-Mistral-7B-using-LoRA-on-MLX-1.html)
👉 The dataset was not generated in the model's annotation format, so generation could not stop and ran on until the maximum number of tokens was produced.
This time we solve that problem.
## Dataset: WikiSQL
- [WikiSQL](https://github.com/salesforce/WikiSQL)
- [sqllama/sqllama-V0](https://huggingface.co/sqllama/sqllama-V0/blob/main/wikisql.ipynb)
### Modify the script mlx-examples/lora/data/wikisql.py
```py
if __name__ == "__main__":
    # ......
    for dataset, name, size in datasets:
        with open(f"data/{name}.jsonl", "w") as fid:
            for e, t in zip(range(size), dataset):
                """
                The text in the variable t looks like this:
                ------------------------
                <s>table: 1-1058787-1
                columns: Approximate Age, Virtues, Psycho Social Crisis, Significant Relationship, Existential Question [ not in citation given ], Examples
                Q: How many significant relationships list Will as a virtue?
                A: SELECT COUNT Significant Relationship FROM 1-1058787-1 WHERE Virtues = 'Will'</s>
                """
                t = t[3:]  # Strip the leading <s>; the tokenizer adds <s> automatically
                json.dump({"text": t}, fid)
                fid.write("\n")
```
Run the script `data/wikisql.py` to generate the dataset.
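Assuming you run it from the `mlx-examples/lora` directory, that is simply:
```bash
python data/wikisql.py
# should write data/train.jsonl, data/valid.jsonl and data/test.jsonl
```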
### Sample
```
table: 1-10753917-1
columns: Season, Driver, Team, Engine, Poles, Wins, Podiums, Points, Margin of defeat
Q: Which podiums did the alfa romeo team have?
A: SELECT Podiums FROM 1-10753917-1 WHERE Team = 'Alfa Romeo'</s>
```
## Fine-tuning
- Pre-trained model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### LoRA fine-tuning
```bash
python lora.py --model mistralai/Mistral-7B-v0.1 \
--train \
--iters 600
```
```
Total parameters 7243.436M
Trainable parameters 1.704M
python lora.py --model mistralai/Mistral-7B-v0.1 --train --iters 600 50.58s user 214.71s system 21% cpu 20:26.04 total
```
Fine-tuning touches fewer than 2.35 in 10,000 of the model's parameters (1.704M / 7243.436M × 10000). The 600-iteration LoRA fine-tune took 20 minutes 26 seconds and used 46 GB of memory.
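A quick check of that ratio:
```py
trainable, total = 1.704e6, 7243.436e6
print(f"{trainable / total * 10000:.2f} per 10,000")  # 2.35 per 10,000, i.e. about 0.024% of all parameters
```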
## Evaluation
Compute the perplexity (PPL) and cross-entropy loss (Loss) on the test set.
```bash
python lora.py --model mistralai/Mistral-7B-v0.1 \
--adapter-file adapters.npz \
--test
```
```
Iter 100: Test loss 1.351, Test ppl 3.862.
Iter 200: Test loss 1.327, Test ppl 3.770.
Iter 300: Test loss 1.353, Test ppl 3.869.
Iter 400: Test loss 1.355, Test ppl 3.875.
Iter 500: Test loss 1.294, Test ppl 3.646.
Iter 600: Test loss 1.351, Test ppl 3.863.
```
| Iter | Test loss | Test ppl |
| :--: | --------: | -------: |
| 100 | 1.351 | 3.862 |
| 200 | 1.327 | 3.770 |
| 300 | 1.353 | 3.869 |
| 400 | 1.355 | 3.875 |
| 500 | 1.294 | 3.646 |
| 600 | 1.351 | 3.863 |
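The perplexity column is simply the exponential of the loss column, for example:
```py
import math
print(math.exp(1.351))  # ≈ 3.861, matching the table's 3.862 up to rounding of the logged loss
```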
Evaluation used 26 GB of memory.
## Fuse
```bash
python fuse.py --model mistralai/Mistral-7B-v0.1 \
--adapter-file adapters.npz \
--save-path lora_fused_model
```
## Generate SQL
### What is Wang Junjian's name?
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: What is Wang Junjian's name?
A: "
```
```
SELECT Name FROM students WHERE Name = 'Wang Junjian'
```
### How old is Wang Junjian?
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: How old is Wang Junjian?
A: "
```
```
SELECT Age FROM Students WHERE Name = 'Wang Junjian'
```
### Which school did Wang Junjian come from?
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: Which school did Wang Junjian come from?
A: "
```
```
SELECT School FROM Students WHERE Name = 'Wang Junjian'
```
### Query Wang Junjian's name, age, and school information.
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: Query Wang Junjianโs name, age, and school information.
A: "
```
```
SELECT Name, Age, School FROM Students WHERE Name = 'Wang Junjian'
```
### Query all information about Wang Junjian.
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: Query all information about Wang Junjian.
A: "
```
```
SELECT Name FROM students WHERE Name = 'Wang Junjian'
```
The training data may be insufficient; the model selected only `Name` rather than all columns.
### Count how many students there are in ninth grade.
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: Count how many students there are in ninth grade.
A: "
```
```
SELECT COUNT Name FROM Students WHERE Grade = '9th'
```
### Count how many students there are in ninth grade (the value for ninth grade is 9).
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
The value for ninth grade is 9.
Q: Count how many students there are in ninth grade.
A: "
```
```bash
python -m mlx_lm.generate --model lora_fused_model \
--max-tokens 50 \
--prompt "table: students
columns: Name, Age, School, Grade, Height, Weight
Q: Count how many students there are in ninth grade. (The value for ninth grade is 9.)
A: "
```
```
SELECT COUNT Name FROM students WHERE Grade = 9
```
The extra hint can be added easily; its exact placement does not matter much.
## Upload the model to the HuggingFace Hub
1. Join the [MLX Community](https://huggingface.co/mlx-community) organization
2. Create a new model in the MLX Community organization: [mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL](https://huggingface.co/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL)
3. Clone the repository [mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL](https://huggingface.co/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL)
```bash
git clone https://huggingface.co/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL
```
4. Copy the generated model files (everything in the `lora_fused_model` directory) into the repository directory
5. Upload the model to the HuggingFace Hub
```bash
git add .
git commit -m "Fine tuning Text2SQL based on Mistral-7B using LoRA on MLX"
git push
```
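Alternatively, recent versions of `huggingface_hub` can upload the folder in one step without a local git clone (a sketch; run `huggingface-cli login` first):
```bash
huggingface-cli upload mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL lora_fused_model .
```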
### git push errors
1. Cannot push
Error message:
```
Uploading LFS objects: 0% (0/2), 0 B | 0 B/s, done.
batch response: Authorization error.
error: failed to push some refs to 'https://huggingface.co/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL'
```
Solution:
```bash
vim .git/config
```
```conf
[remote "origin"]
url = https://wangjunjian:write_token@huggingface.co/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL
fetch = +refs/heads/*:refs/remotes/origin/*
```
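The same change can be made without editing the file by hand (replace `write_token` with a token that has write access):
```bash
git remote set-url origin https://wangjunjian:write_token@huggingface.co/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL
```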
2. Cannot upload files larger than 5 GB
Error message:
```
warning: current Git remote contains credentials
batch response:
You need to configure your repository to enable upload of files > 5GB.
Run "huggingface-cli lfs-enable-largefiles ./path/to/your/repo" and try again.
```
Solution:
```bash
huggingface-cli login
huggingface-cli lfs-enable-largefiles /Users/junjian/HuggingFace/mlx-community/Mistral-7B-v0.1-LoRA-Text2SQL
```
## References
- [MLX Community](https://huggingface.co/mlx-community)
- [Fine-Tuning with LoRA or QLoRA](https://github.com/ml-explore/mlx-examples/tree/main/lora)
- [Generate Text with LLMs and MLX](https://github.com/ml-explore/mlx-examples/tree/main/llms)
- [Awesome Text2SQL](https://github.com/eosphoros-ai/Awesome-Text2SQL)
- [Awesome Text2SQL (Chinese)](https://github.com/eosphoros-ai/Awesome-Text2SQL/blob/main/README.zh.md)
- [Mistral AI](https://huggingface.co/mistralai)
- [A Beginnerโs Guide to Fine-Tuning Mistral 7B Instruct Model](https://adithyask.medium.com/a-beginners-guide-to-fine-tuning-mistral-7b-instruct-model-0f39647b20fe)
- [Mistral Instruct 7B Finetuning on MedMCQA Dataset](https://saankhya.medium.com/mistral-instruct-7b-finetuning-on-medmcqa-dataset-6ec2532b1ff1)
- [Fine-tuning Mistral on your own data](https://github.com/brevdev/notebooks/blob/main/mistral-finetune-own-data.ipynb)
- [mlx-examples llms Mistral](https://github.com/ml-explore/mlx-examples/blob/main/llms/mistral/README.md)
|
Sihead/Mixtral_Alpace_v2
|
Sihead
| 2024-01-29T03:40:10Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-22T02:03:38Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: Mixtral_Alpace_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral_Alpace_v2
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 8
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T03:39:26Z | 57 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"Toten5/Marcoroni-neural-chat-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T03:28:45Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Toten5/Marcoroni-neural-chat-7B-v2
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""",  # Prompt (triple-quoted so the multi-line chat template is a valid Python string)
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True  # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
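As a minimal sketch of the first option (the import path and parameters are illustrative and depend on your LangChain version; older versions use `from langchain.llms import LlamaCpp`):
```python
from langchain_community.llms import LlamaCpp

# Reuse the model file downloaded above; settings mirror the llama.cpp example.
llm = LlamaCpp(
    model_path="./Marcoroni-neural-chat-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_ctx=32768,      # max sequence length
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Write a story about llamas."))
```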
|
asun17904/anliR3-gpt2
|
asun17904
| 2024-01-29T03:34:00Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-01-28T16:13:32Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Dataset: ANLI
Round: None
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters:
- `numerical stability denominator constant` = 1.0
- `lambda` = 0.0
- `alpha` = 1.0
- `beta` = 1.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|1.102|0.429|1.0|
|1.076|0.451|2.0|
|1.092|0.443|3.0|
|1.092|0.441|4.0|
|1.098|0.438|5.0|
|1.104|0.437|6.0|
|1.094|0.443|7.0|
|1.095|0.448|8.0|
|1.110|0.434|9.0|
|
jlbaker361/dcgan-gpu-wikiart25-clip
|
jlbaker361
| 2024-01-29T03:29:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-29T02:55:17Z |
---
{}
---
Creative Adversarial Network
epochs: 5
dataset: jlbaker361/wikiart-balanced25
n classes: 27
batch_size: 4
images were resized to 768 and then center-cropped to 512
used clip=True
discriminator parameters:
init_dim: 32
final_dim: 512
generator parameters:
input noise_dim: 100
|
liwii/electra-based-ja-fc-classifier
|
liwii
| 2024-01-29T03:29:11Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"generated_from_trainer",
"base_model:megagonlabs/transformers-ud-japanese-electra-base-discriminator",
"base_model:finetune:megagonlabs/transformers-ud-japanese-electra-base-discriminator",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T01:59:11Z |
---
license: mit
base_model: megagonlabs/transformers-ud-japanese-electra-base-discriminator
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: electra-based-ja-fc-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-based-ja-fc-classifier
This model is a fine-tuned version of [megagonlabs/transformers-ud-japanese-electra-base-discriminator](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1861
- Accuracy: 0.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.38340974405913e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3593 | 1.0 | 1223 | 0.2104 | 0.9160 |
| 0.2137 | 2.0 | 2446 | 0.1861 | 0.9414 |
| 0.1187 | 3.0 | 3669 | 0.2379 | 0.9355 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Jeyong/Synatra-10.7B-v0.4-awq
|
Jeyong
| 2024-01-29T03:28:55Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-29T02:46:24Z |
## Base Model
[maywell/Synatra-10.7B-v0.4](https://huggingface.co/maywell/Synatra-10.7B-v0.4)
|
MaddieMo/ChatGLM3-6B-Test
|
MaddieMo
| 2024-01-29T03:28:04Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"arxiv:2103.10360",
"arxiv:2210.02414",
"endpoints_compatible",
"region:us"
] | null | 2024-01-27T03:36:23Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM3-6B
<p align="center">
💻 <a href="https://github.com/THUDM/ChatGLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-25ti5uohv-A_hs~am_D3Q8XPZMpj7wwQ" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
<p align="center">
📍Experience the larger-scale ChatGLM model at <a href="https://www.chatglm.cn">chatglm.cn</a>
</p>
## Introduction
ChatGLM3-6B is the latest open-source model in the ChatGLM series. While retaining many excellent features such as smooth dialogue and low deployment threshold from the previous two generations, ChatGLM3-6B introduces the following features:
1. **More Powerful Base Model:** The base model of ChatGLM3-6B, ChatGLM3-6B-Base, employs a more diverse training dataset, more sufficient training steps, and a more reasonable training strategy. Evaluations on datasets such as semantics, mathematics, reasoning, code, knowledge, etc., show that ChatGLM3-6B-Base has the strongest performance among pre-trained models under 10B.
2. **More Comprehensive Function Support:** ChatGLM3-6B adopts a newly designed [Prompt format](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT_en.md), in addition to the normal multi-turn dialogue. It also natively supports [function call](https://github.com/THUDM/ChatGLM3/blob/main/tool_using/README_en.md), code interpreter, and complex scenarios such as agent tasks.
3. **More Comprehensive Open-source Series:** In addition to the dialogue model ChatGLM3-6B, the base model ChatGLM-6B-Base and the long-text dialogue model ChatGLM3-6B-32K are also open-sourced. All the weights are **fully open** for academic research, and after completing the [questionnaire](https://open.bigmodel.cn/mla/form) registration, they are also **allowed for free commercial use**.
## Dependencies
```shell
pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate
```
## Code Usage
You can generate dialogue by invoking the ChatGLM3-6B model with the following code:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True).half().cuda()
>>> model = model.eval()
>>> response, history = model.chat(tokenizer, "你好", history=[])  # "你好" = "Hello"
>>> print(response)
Hello 👋! I am the AI assistant ChatGLM-6B. Nice to meet you, feel free to ask me any question.
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)  # "What should I do if I can't sleep at night?"
>>> print(response)
Not being able to fall asleep at night can make you feel anxious or uncomfortable, but here are some methods that can help you fall asleep:
1. Keep a regular sleep schedule: a consistent schedule helps you build healthy sleep habits and makes it easier to fall asleep. Try to go to bed and get up at the same time every day.
2. Create a comfortable sleep environment: make sure it is comfortable, quiet, dark and at a suitable temperature. Use comfortable bedding and keep the room ventilated.
3. Relax body and mind: do something relaxing before bed, such as taking a hot bath, listening to soft music or reading an interesting book; this helps relieve tension and anxiety.
4. Avoid caffeinated drinks: caffeine is a stimulant that affects sleep quality. Try not to drink coffee, tea or cola before bed.
5. Avoid non-sleep activities in bed: watching movies, playing games or working in bed may interfere with your sleep.
6. Try breathing techniques: deep breathing is a relaxation technique that relieves tension and anxiety. Inhale slowly, hold for a few seconds, then exhale slowly.
If these methods do not help you fall asleep, consider consulting a doctor or a sleep specialist for further advice.
```
For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM).
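For example, continuing from the snippet above, the quantized loading path described there is a one-line change (API per the THUDM ChatGLM repository; a sketch, please verify against the Github Repo for your version):
```ipython
>>> # Load with 4-bit quantization to reduce GPU memory usage
>>> model = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True).quantize(4).cuda()
```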
## License
The code in this repository is open-sourced under the [Apache-2.0 license](LICENSE), while the use of the ChatGLM3-6B model weights needs to comply with the [Model License](MODEL_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers.
```
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
```
|
MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T03:20:36Z | 44 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"Minirecord/Mini_synata_7b_011",
"pytorch",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"license:apache-2.0",
"base_model:MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T03:10:03Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Minirecord/Mini_synata_7b_011
- pytorch
- license:cc-by-sa-4.0
- autotrain_compatible
- endpoints_compatible
- region:us
- license:apache-2.0
model_name: Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""",  # Prompt (triple-quoted so the multi-line chat template is a valid Python string)
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True  # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Mini_synata_7b_011-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
megastudyedu/ME-dpo-7B-v1.1
|
megastudyedu
| 2024-01-29T03:20:12Z | 2,330 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T01:03:06Z |
---
license: cc-by-nc-nd-4.0
language:
- ko
---
# Model Card for ME-dpo-7B-v1.1
## Developed by: 메가스터디교육 (Megastudy Education), 프리딕션 (Prediction), 마이스 (MAIS)
## Base Model: [megastudyedu/ME-7B-v1.1](https://huggingface.co/megastudyedu/ME-7B-v1.1)
## Datasets Used
- jondurbin/bagel-v0.3 translated dataset (translated by megastudyedu/ME-7B-v1.0)
|
maywell/Synatra-Mixtral-8x7B
|
maywell
| 2024-01-29T03:18:22Z | 1,424 | 14 |
transformers
|
[
"transformers",
"pytorch",
"mixtral",
"text-generation",
"moe",
"conversational",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T08:21:05Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- moe
---
# **Synatra-Mixtral-8x7B**
<img src="./Synatra-Mixtral.png" alt="Synatra-Mixtral-8x7B" width="512"/>
**Synatra-Mixtral-8x7B** is a fine-tuned version of the Mixtral-8x7B-Instruct-v0.1 model using **Korean** datasets.
This model features overwhelmingly superior comprehension and inference capabilities and is licensed under apache-2.0.
# **Join Our Discord**
[Server Link](https://discord.gg/MrBt3PXdXc)
# **License**
**OPEN**, Apache-2.0.
# **Model Details**
**Base Model**
[mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
**Trained On**
A100 80GB * 6
**Instruction format**
It follows **Alpaca** format.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{input}
### Response:
{output}
```
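For example, a complete prompt for this template can be assembled as a plain string (a sketch; the instruction text is an arbitrary example):
```python
instruction = "아인슈타인의 상대성이론에 대해서 자세히 설명해줘."  # "Explain Einstein's theory of relativity in detail."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)
```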
# **Model Benchmark**
TBD
# **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-Mixtral-8x7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Mixtral-8x7B")
messages = [
    {"role": "user", "content": "아인슈타인의 상대성이론에 대해서 자세히 설명해줘."},  # "Explain Einstein's theory of relativity in detail."
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
# **Author's Message**
This model's training was not sponsored by anyone; it was made possible by support from people around the world.
[Support Me](https://www.buymeacoffee.com/mwell)
Contact Me on Discord - **is.maywell**
Follow me on twitter: https://twitter.com/stablefluffy
|
megastudyedu/ME-7B-v1.0
|
megastudyedu
| 2024-01-29T03:18:06Z | 2,249 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T00:54:20Z |
---
license: cc-by-nc-nd-4.0
language:
- ko
---
# Model Card for ME-7B-v1.0
## Developed by: 메가스터디교육 (Megastudy Education), 프리딕션 (Prediction), 마이스 (MAIS)
## Base Model: [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1)
## Datasets Used
- jondurbin/bagel-v0.3 SFT translated dataset (translated by megastudyedu/ME-7B-v1.0)
- [megastudy/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudy/M-SOLAR-10.7B-v1.3) data
|
megastudyedu/ME-dpo-7B-v1.0
|
megastudyedu
| 2024-01-29T03:15:11Z | 2,349 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T00:44:45Z |
---
license: cc-by-nc-nd-4.0
language:
- ko
---
# Model Card for ME-dpo-7B-v1.0
## Developed by: 메가스터디교육 (Megastudy Education), 프리딕션 (Prediction), 마이스 (MAIS)
## Base Model: [megastudyedu/ME-7B-v1.0](https://huggingface.co/megastudyedu/ME-7B-v1.0)
## Datasets Used
- jondurbin/bagel-v0.3 translated dataset (translated by megastudyedu/ME-7B-v1.0)
|
Ardasel/123
|
Ardasel
| 2024-01-29T03:06:24Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-29T03:05:09Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
  output:
    url: images/da5c0d3d-2daf-427c-8db9-3927bb230279.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: burak
---
# burak
<Gallery />
## Trigger words
You should use `burak` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Ardasel/123/tree/main) them in the Files & versions tab.
|
MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T03:01:50Z | 43 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"MRAIRR/MRAI_synatra_7B_v1",
"pytorch",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T02:50:58Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- MRAIRR/MRAI_synatra_7B_v1
- pytorch
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
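For example, a chat-style invocation could look like this (a sketch reusing the flags above; adjust the context size and offload count for your hardware):

```shell
./main -ngl 35 -m MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```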
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
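llama-cpp-python also bundles an OpenAI-compatible web server that can serve the same GGUF file; a minimal sketch (the extra `[server]` dependencies are required):

```shell
pip install 'llama-cpp-python[server]'
python3 -m llama_cpp.server --model ./MRAI_synatra_7B_v1-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --n_gpu_layers 35
```

By default the server listens on http://localhost:8000 and speaks the OpenAI chat completions API, so standard OpenAI client libraries can talk to it.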
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
cmp-nct/Yi-VL-34B-GGUF
|
cmp-nct
| 2024-01-29T02:59:38Z | 86 | 11 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-24T19:59:30Z |
This is a quantization of Yi-VL-34B and of its visual transformer.
You currently need to apply this PR to make it work: https://github.com/ggerganov/llama.cpp/pull/5093 - it adds the additional normalization steps to the projection.
Yi-VL-34B is prone to hallucinations; to me it looks like a rushed release, as if something did not go right in training.
However, while the 6B was the second-worst llava-style model I have tested, the 34B did show some strengths.
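With that PR applied, inference goes through llama.cpp's llava example; the sketch below shows the general shape of the invocation (the file names are hypothetical placeholders for the quantized model and mmproj files in this repo):

```shell
./llava-cli -m yi-vl-34b.Q4_K_M.gguf --mmproj mmproj-yi-vl-34b-f16.gguf --image photo.jpg -p "Describe this image in detail." --temp 0.1
```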
|
tyty979812/nan-tw-whisper-large-v3
|
tyty979812
| 2024-01-29T02:58:43Z | 75 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"multilingual",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-28T14:02:28Z |
---
language:
- multilingual
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
base_model: openai/whisper-large-v3
model-index:
- name: Whisper large-v3 nan-tw
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: nan-tw
split: test
args: 'config: nan-tw, split: test'
metrics:
- type: wer
value: 280.9248554913295
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper large-v3 nan-tw
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0601
- Wer: 280.9249
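For a quick try-out, the checkpoint loads with the standard ASR pipeline; a minimal sketch (the audio path is a hypothetical local file):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tyty979812/nan-tw-whisper-large-v3")
print(asr("sample.wav")["text"])  # any local audio file; it is resampled to 16 kHz internally
```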
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
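These settings correspond roughly to a `Seq2SeqTrainingArguments` configuration like the sketch below; anything not in the list above (such as the output directory) is an assumption:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-nan-tw",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed precision
)
```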
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 0.2485 | 3.05 | 1000 | 0.9971 | 538.5505 |
| 0.0154 | 6.1 | 2000 | 1.0482 | 1460.5158 |
| 0.0024 | 9.15 | 3000 | 1.0330 | 261.3161 |
| 0.0014 | 12.2 | 4000 | 1.0554 | 300.3112 |
| 0.0003 | 15.24 | 5000 | 1.0601 | 280.9249 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Islanderintel/whisper-base-en
|
Islanderintel
| 2024-01-29T02:50:37Z | 62 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-25T18:23:02Z |
---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
model-index:
- name: whisper-base-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-en
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/CodeMate-v0.1-4.65bpw-h6-exl2
|
LoneStriker
| 2024-01-29T02:50:03Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"CodeMate",
"Code",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T02:41:25Z |
---
license: llama2
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- CodeMate
- Code
---
# **CodeMate-v0.1**
CodeMate-v0.1 is an intelligent programming assistant developed by [CodeMate](https://codemate.ai).
This model aims to assist users in generating high-quality code solutions for programming problems.
Please note that this model is currently in version 0.1.
## Model Details
- **Training Data:** Exclusively fine-tuned on a proprietary dataset of 1.8 billion tokens of high-quality programming problems and solutions.
- The dataset was generated manually and is internal to CodeMate.
- **Training Techniques:** The model was fine-tuned using Flash Attention 2, trained over 15 hours on 40 A100-80GB GPUs.
- A sequence length of 8096 tokens was used during training.
- **Multilingual Support:** CodeMate-v0.1 is proficient in multiple programming languages, including Python, C/C++, TypeScript, Java, and more.
## How to Get Started with the Model
Make sure to install Transformers from the main git branch:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
## How to Prompt the Model
This model accepts prompts in the Alpaca/Vicuna instruction format. For example:
```markdown
### System Prompt
You are an intelligent programming assistant.
### User Message
Implement a linked list in C++
### Assistant
...
```
## Load the Model
To load the model, utilize the following Python script:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Initialize the model
model_path = "codemateai/CodeMate-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)
# ... generate response ...
```
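The `# ... generate response ...` placeholder can be filled in along the following lines; a minimal sketch continuing the snippet above, using the Alpaca/Vicuna format from the previous section (the generation settings are assumptions, not recommended defaults):

```python
# A minimal generation sketch (assumed usage, not an official example).
prompt = """### System Prompt
You are an intelligent programming assistant.

### User Message
Implement a linked list in C++

### Assistant
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.2, do_sample=True)
# Strip the prompt tokens and decode only the completion
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```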
## Bias, Risks, and Limitations
This model has undergone very limited testing. CodeMate recommends additional safety testing before any real-world deployments.
For more information and updates, visit the [CodeMate website](https://codemate.ai).
|
MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T02:43:19Z | 67 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"Azure99/blossom-v4-mistral-7b",
"pytorch",
"zh",
"en",
"dataset:Azure99/blossom-chat-v2",
"dataset:Azure99/blossom-math-v3",
"dataset:Azure99/blossom-wizard-v2",
"dataset:Azure99/blossom-orca-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T02:32:33Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Azure99/blossom-v4-mistral-7b
- pytorch
- zh
- en
- dataset:Azure99/blossom-chat-v2
- dataset:Azure99/blossom-math-v3
- dataset:Azure99/blossom-wizard-v2
- dataset:Azure99/blossom-orca-v2
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a short llama-cpp-python sketch follows the links:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
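For instance, following the first guide, a minimal `LlamaCpp` setup might look like this (a sketch; the parameters mirror the llama-cpp-python example above and are not tuned recommendations):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./blossom-v4-mistral-7b-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    n_ctx=4096,
    temperature=0.7,
)
print(llm.invoke("Name three facts about llamas."))
```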
|
LoneStriker/CodeMate-v0.1-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-29T02:41:23Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"CodeMate",
"Code",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T02:33:52Z |
---
license: llama2
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- CodeMate
- Code
---
# **CodeMate-v0.1**
CodeMate-v0.1 is an intelligent programming assistant developed by [CodeMate](https://codemate.ai).
This model aims to assist users in generating high-quality code solutions for programming problems.
Please note that this model is currently in version 0.1.
## Model Details
- **Training Data:** Exclusively fine-tuned on a proprietary dataset of 1.8 billion tokens of high-quality programming problems and solutions.
- The dataset was generated manually and is internal to CodeMate.
- **Training Techniques:** The model was fine-tuned using Flash Attention 2, trained over 15 hours on 40 A100-80GB GPUs.
- A sequence length of 8096 tokens was used during training.
- **Multilingual Support:** CodeMate-v0.1 is proficient in multiple programming languages, including Python, C/C++, TypeScript, Java, and more.
## How to Get Started with the Model
Make sure to install Transformers from the main git branch:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
## How to Prompt the Model
This model accepts prompts in the Alpaca/Vicuna instruction format. For example:
```markdown
### System Prompt
You are an intelligent programming assistant.
### User Message
Implement a linked list in C++
### Assistant
...
```
## Load the Model
To load the model, utilize the following Python script:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Initialize the model
model_path = "codemateai/CodeMate-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)
# ... generate response ...
```
## Bias, Risks, and Limitations
This model has undergone very limited testing. CodeMate recommends additional safety testing before any real-world deployments.
For more information and updates, visit the [CodeMate website](https://codemate.ai).
|
LoneStriker/CodeMate-v0.1-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-29T02:33:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"CodeMate",
"Code",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T02:27:11Z |
---
license: llama2
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- CodeMate
- Code
---
# **CodeMate-v0.1**
CodeMate-v0.1 is an intelligent programming assistant developed by [CodeMate](https://codemate.ai).
This model aims to assist users in generating high-quality code solutions for programming problems.
Please note that this model is currently in version 0.1.
## Model Details
- **Training Data:** Exclusively fine-tuned on a proprietary dataset of 1.8 billion tokens of high-quality programming problems and solutions.
- The dataset was generated manually and is internal to CodeMate.
- **Training Techniques:** The model was fine-tuned using Flash Attention 2, trained over 15 hours on 40 A100-80GB GPUs.
- A sequence length of 8096 tokens was used during training.
- **Multilingual Support:** CodeMate-v0.1 is proficient in multiple programming languages, including Python, C/C++, TypeScript, Java, and more.
## How to Get Started with the Model
Make sure to install Transformers from the main git branch:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
## How to Prompt the Model
This model accepts prompts in the Alpaca/Vicuna instruction format. For example:
```markdown
### System Prompt
You are an intelligent programming assistant.
### User Message
Implement a linked list in C++
### Assistant
...
```
## Load the Model
To load the model, utilize the following Python script:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Initialize the model
model_path = "codemateai/CodeMate-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)
# ... generate response ...
```
## Bias, Risks, and Limitations
This model has undergone very limited testing. CodeMate recommends additional safety testing before any real-world deployments.
For more information and updates, visit the [CodeMate website](https://codemate.ai).
|
huangyt/Mistral-7B-Instruct-v0.2-ccp5-r16-q_v_k_o_gate_down_up-2
|
huangyt
| 2024-01-29T02:31:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T02:27:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Patcas/plbart-worksDocWorks-step-3
|
Patcas
| 2024-01-29T02:24:55Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/plbart-works",
"base_model:finetune:Patcas/plbart-works",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-29T01:24:57Z |
---
base_model: Patcas/plbart-works
tags:
- generated_from_trainer
model-index:
- name: plbart-worksDocWorks-step-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-worksDocWorks-step-3
This model is a fine-tuned version of [Patcas/plbart-works](https://huggingface.co/Patcas/plbart-works) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 0.9631 |
| No log | 2.0 | 460 | 0.8839 |
| 0.9862 | 3.0 | 690 | 0.8658 |
| 0.9862 | 4.0 | 920 | 0.8792 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
yleo/monacan-translator-fr-mon
|
yleo
| 2024-01-29T02:20:20Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:adapter:mlabonne/NeuralBeagle14-7B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-28T19:59:32Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mlabonne/NeuralBeagle14-7B
model-index:
- name: monacan-translator-fr-mon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# monacan-translator-fr-mon
This model is a fine-tuned version of [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
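Since this repository contains a PEFT adapter rather than full model weights, inference requires loading the adapter on top of the base model; a minimal sketch (not an official usage example):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mlabonne/NeuralBeagle14-7B"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "yleo/monacan-translator-fr-mon")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# The prompt format is not documented in this card; plain text is an assumption.
inputs = tokenizer("Traduis en monégasque : Bonjour, comment allez-vous ?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```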
|
macadeliccc/Severus-WestLake-2x7B-v2-GGUF
|
macadeliccc
| 2024-01-29T02:14:34Z | 7 | 0 | null |
[
"gguf",
"license:cc",
"endpoints_compatible",
"region:us"
] | null | 2024-01-28T01:13:52Z |
---
license: cc
---
<pre>----Benchmark Complete----
2024-01-28 17:31:00
Time taken: 54.0 mins
Prompt Format: Mistral
Model: macadeliccc/Severus-WestLake-2x7B-v2-GGUF
Score (v2): 73.76
Parseable: 171.0
---------------
Batch completed
Time taken: 54.1 mins
---------------
</pre>
|
NickyNicky/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1
|
NickyNicky
| 2024-01-29T02:14:13Z | 80 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"en",
"es",
"ru",
"zh",
"de",
"fr",
"th",
"ca",
"it",
"ja",
"pl",
"eo",
"eu",
"vi",
"fi",
"hu",
"ar",
"nl",
"da",
"tr",
"ko",
"he",
"id",
"cs",
"bn",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T13:36:26Z |
---
library_name: transformers
tags:
- merge
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
widget:
- text: |
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
podrias escribir un codigo de ejemplo en Python<|im_end|>
<|im_start|>assistant
license: apache-2.0
---
# Model Card for MixLlama
<!--  -->
<!--  -->

<!-- Provide a quick summary of what the model is/does. -->
```yaml
experts:
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
positive_prompts:
- ""
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1
positive_prompts:
- ""
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1
positive_prompts:
- ""
base_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
gate_mode: random # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
```
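The block above is a mergekit MoE configuration; a merge like this is typically produced with the `mergekit-moe` entry point, roughly as follows (a sketch, not the exact command used):

```shell
pip install mergekit
mergekit-moe config.yaml ./Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1
```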
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
)
import torch

new_model = "NickyNicky/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1"

model = AutoModelForCausalLM.from_pretrained(
    new_model,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    # use_flash_attention_2=False,
)

tokenizer = AutoTokenizer.from_pretrained(
    new_model,
    max_length=2048,
    trust_remote_code=True,
    use_fast=True,
)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'right'  # 'left' also works for generation-only use

prompt = """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
escribe una historia de amor.<|im_end|>
<|im_start|>assistant
"""

inputs = tokenizer.encode(prompt,
                          return_tensors="pt",
                          add_special_tokens=False).cuda()

generation_config = GenerationConfig(
    max_new_tokens=700,
    temperature=0.5,
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.1,  # 1.0 means no penalty; 1.2 is suggested by the CTRL paper
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

outputs = model.generate(
    generation_config=generation_config,
    input_ids=inputs,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
|
wgj0714/my-awesome-model
|
wgj0714
| 2024-01-29T02:07:00Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:davidkim205/komt-mistral-7b-v1",
"base_model:adapter:davidkim205/komt-mistral-7b-v1",
"region:us"
] | null | 2024-01-29T01:35:02Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: davidkim205/komt-mistral-7b-v1
model-index:
- name: my-awesome-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Alpaca69B/phi-2-absa-semeval-2016-3
|
Alpaca69B
| 2024-01-29T01:50:01Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T20:48:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gayanin/pubmed-mixed-noise-v3-0.4
|
gayanin
| 2024-01-29T01:48:48Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-29T00:58:27Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: pubmed-mixed-noise-v3-0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmed-mixed-noise-v3-0.4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4148 | 0.11 | 500 | 1.2099 |
| 1.2472 | 0.21 | 1000 | 1.0718 |
| 1.0935 | 0.32 | 1500 | 0.9924 |
| 1.0193 | 0.43 | 2000 | 0.9482 |
| 0.8859 | 0.54 | 2500 | 0.9335 |
| 0.9838 | 0.64 | 3000 | 0.8792 |
| 0.9195 | 0.75 | 3500 | 0.8665 |
| 0.9793 | 0.86 | 4000 | 0.8397 |
| 0.8513 | 0.96 | 4500 | 0.8215 |
| 0.7384 | 1.07 | 5000 | 0.8261 |
| 0.8117 | 1.18 | 5500 | 0.8028 |
| 0.8606 | 1.28 | 6000 | 0.7930 |
| 0.8139 | 1.39 | 6500 | 0.7873 |
| 0.84 | 1.5 | 7000 | 0.7761 |
| 0.733 | 1.61 | 7500 | 0.7728 |
| 0.7102 | 1.71 | 8000 | 0.7644 |
| 0.6857 | 1.82 | 8500 | 0.7579 |
| 0.7244 | 1.93 | 9000 | 0.7500 |
| 0.5931 | 2.03 | 9500 | 0.7515 |
| 0.6165 | 2.14 | 10000 | 0.7484 |
| 0.6681 | 2.25 | 10500 | 0.7414 |
| 0.6716 | 2.35 | 11000 | 0.7390 |
| 0.6468 | 2.46 | 11500 | 0.7387 |
| 0.5505 | 2.57 | 12000 | 0.7361 |
| 0.635 | 2.68 | 12500 | 0.7333 |
| 0.6632 | 2.78 | 13000 | 0.7297 |
| 0.6271 | 2.89 | 13500 | 0.7278 |
| 0.5843 | 3.0 | 14000 | 0.7276 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Lifan-Z/Chinese-Classic-Poem-Generator-style5x4-GPT2
|
Lifan-Z
| 2024-01-29T01:48:09Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"art",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-21T13:41:36Z |
---
license: apache-2.0
language:
- zh
tags:
- art
---
### **How to use "Lifan-Z/Chinese-Classic-Poem-Generator-style5x4-GPT2":**
```python
# The link to the demo is at the bottom right corner of the Model Card.
import transformers
from transformers import pipeline
gpt2 = pipeline('text-generation', model = "Lifan-Z/Chinese-Classic-Poem-Generator-style5x4-GPT2")
sequences = gpt2('<|endoftext|>้จ', max_length=26, do_sample=True, top_k=20, top_p=0.9, num_return_sequences=6, eos_token_id=0)
for seq in sequences:
print(seq)
Output:
{'generated_text': '<|endoftext|>้จ ่ฟ ่ฑ ้ฃ ่ถ ๏ผ ็ ็ฌผ ่ ๅ ไบ ใ ๅด ็พก ็ซน ้ด ๆ ๏ผ ๅพ ๅพ ็ซน ๅค ๅฑฑ ใ'}
{'generated_text': '<|endoftext|>้จ ็ฝข ๅฑฑ ๆ ๆ ๏ผ ๆด ๅผ ้ ้ธ ๅผ ใ ๅค ๆทฑ ่ฑ ่ฝ ๅฐฝ ๏ผ ๆฎ ็ง ้ฒ ไธญ ็ง ใ'}
{'generated_text': '<|endoftext|>้จ ๆปด ๆฑ ๅ ๅฒธ ๏ผ ้ฃ ้ซ ๆฅ ๆฎ ๅ ใ ไธ ๆ ๆฅ ๆฐด ็ปฟ ๏ผ ไธ ๅถ ่ฝ ้ฟ ๅฎ ใ'}
{'generated_text': '<|endoftext|>้จ ไฝ ้ฃ ๆปก ๅฐ ๏ผ ็ง ๅ ๆฅ ๆ ๆถ ใ ๅ ๆฅ ๆ ไธ ไบ ๏ผ ๅฏ ้ฃ ็ฌ ๆ ่จ ใ'}
{'generated_text': '<|endoftext|>้จ ็ฝข ็ง ้ฃ ๆฅ ๏ผ ้ฃ ๅ ๆ ๆฅ ้ฟ ใ ไธ ็ฏฑ ๆ ไฝณ ่ถฃ ๏ผ ็ฌ ๅฏน ็ฟ ๅฑ ๅผ ใ'}
{'generated_text': '<|endoftext|>้จ ไธญ ่ฑ ่ฝ ๅป ๏ผ ้ฃ ้ ๆณ ้ฃ ้ฃ ใ ็ธ ๆ ไฝ ๅฟๆขฆ ๏ผ ็ฅ ๅ ไบฆ ๆณช ๅ ใ'}
```
### **References:**
The base model on Hugging Face: "yuanzhoulvpi/gpt2_chinese"
https://github.com/chinese-poetry/chinese-poetry/tree/master/%E5%85%A8%E5%94%90%E8%AF%97
|
Lifan-Z/Chinese-Classic-Poem-Generator-style7x4-GPT2
|
Lifan-Z
| 2024-01-29T01:47:06Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"art",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-19T23:56:40Z |
---
license: apache-2.0
language:
- zh
tags:
- art
---
### **How to use "Lifan-Z/Chinese-Classic-Poem-Generator-style7x4-GPT2":**
```python
# The link to the demo is at the bottom right corner of the Model Card.
import transformers
from transformers import pipeline
gpt2 = pipeline('text-generation', model = "Lifan-Z/Chinese-Classic-Poem-Generator-style7x4-GPT2")
sequences = gpt2('<|endoftext|>้จ', max_length=34, do_sample=True, top_k=20, top_p=0.9, num_return_sequences=6, eos_token_id=0)
for seq in sequences:
print(seq)
Output:
{'generated_text': '<|endoftext|>้จ ไฝ ไบ ้พ ๆปก ๆฑ ๆน ๏ผ ้ฃ ๅจ ๅฑฑ ๅ ๆ ๆญฃ ๆต ใ ไธ ่ง ไบบ ๆฅ ๆ ้ ๆฏ ๏ผ ๅด ๅซ ่ฑ ๆจ ๆด ๆทป ็ง ใ'}
{'generated_text': '<|endoftext|>้จ ๅ ๆ ้ฃ ๆฐด ๆ ไฝ ๏ผ ็ง ๅฑฑ ไธ ๅจ ๆ ๅ ๆ ใ ๆฌฒ ็ ็ฝ ไบ ๅฝ ๅฎข ๆขฆ ๏ผ ไธ ๆฎต ้ฒ ไบบ ไธ ๆฎต ๆ ใ'}
{'generated_text': '<|endoftext|>้จ ่ฟ ้ฃ ๆฅ ๅ ๆ ๅฃฐ ๏ผ ๅฐ ็ช ๆธๅค ๆขฆ ๆ ็ฉบ ใ ๅฑฑ ไธญ ไธ ่ฏ ่ฏ ไบบ ้ข ๏ผ ไธ ้ต ๅฏ ้ฃ ้ ๅฎข ่ ใ'}
{'generated_text': '<|endoftext|>้จ ๅ ๅฑฑ ๅท ๆฐด ๅ ๅนฒ ๏ผ ้ฃ ๅ ็ซน ๆ ๅฐ ๆกฅ ๅฏ ใ ไธ ็ฅ ้ ๅจ ๆบช ่ฅฟ ๅฒธ ๏ผ ไฝ ๅค ่ฝ ๅฏป ๆ ้ต ่ฑ ใ'}
{'generated_text': '<|endoftext|>้จ ไฝ ้ฃ ้จ ไธ ็ช ๆด ๏ผ ไธ ่ง ่ฑ ๆถ ๆปก ้ข ๆ ใ ่ฐ ้ ๅฑฑ ๆ ไธ ่ฝ ไฝ ๏ผ ่ ๅง ็น ๅจ ็ฟ ๅพฎ ไธญ ใ'}
{'generated_text': '<|endoftext|>้จ ๆ ๆด ็ช ไธ ไธค ๅฃฐ ๏ผ ็ฉบ ่ ็ป ่ ็ปฟ ๅฆ ่ต ใ ๅฑฑ ๅฎถ ๆช ่ง ๆข่ฑ ๅฅฝ ๏ผ ๅช ๆ ๆฅ ้ฃ ไธ ๆฒ ๆธใ'}
```
### **References:**
The base model on Hugging Face: "yuanzhoulvpi/gpt2_chinese"
https://github.com/chinese-poetry/chinese-poetry/tree/master/%E5%85%A8%E5%94%90%E8%AF%97
|
Lifan-Z/Chinese-Classic-Poem-Generator-style7x8-GPT2
|
Lifan-Z
| 2024-01-29T01:46:27Z | 99 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"art",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T23:25:49Z |
---
license: apache-2.0
language:
- zh
tags:
- art
---
### **How to use "Lifan-Z/Chinese-Classic-Poem-Generator-style7x8-GPT2":**
```python
# The link to the demo is at the bottom right corner of the Model Card.
from transformers import pipeline
gpt2 = pipeline('text-generation', model="Lifan-Z/Chinese-Classic-Poem-Generator-style7x8-GPT2")
sequences = gpt2('<|endoftext|>้จ', max_length=66, do_sample=True, top_k=20, top_p=0.9, num_return_sequences=6, eos_token_id=0)
for seq in sequences:
    print(seq)
```
Output:
```
{'generated_text': '<|endoftext|>้จ ๅ ็ง ้ด ๆปก ๅฑ ้ฃ ๏ผ ๆฐ ๅฃฐ ้ธก ็ฌ ้ ๅข ไธญ ใ ๅทฒ ๆ ไฝ ไบ ๆ ๆ ็ก ๏ผ ๅฏ ๅพ ้ฒ ๆถ ๅด ๆฉ ็ฉบ ใ ไธ ๅญฆ ไธ ่ฅฟ ๆ ๅบ ็จ ๏ผ ไฝ ็ฅ ไธ ๆ ๅณ ๅพ ไพฌ ใ ๆ ๆ ๅ ไธ ้ ไบ ๅป ๏ผ ๆด ๆ ่ฑ ่ธ ้ ๆ ๅฌ ใ'}
{'generated_text': '<|endoftext|>้จ ไฝ ็ง ่ฒ ๅทฒ ่ง ็ถ ๏ผ ๅ ่ง ่ฅฟ ๆน ๆณ ็ปฎ ็ญต ใ ็ฝ ๅ ่ ไบบ ๆ ๅฒ ๆ ๏ผ ็บข ้ข ๅฐ ๅฅณ ๅฑ ๅนด ๅ ใ ่ฏ ๆ ๆช ๅฟ ็ฆป ไธ ๅพ ๏ผ ้ๅฐฝ ่ ้กป ๅ ไธ ๆจฝ ใ ไป ๅค ็ธ ๆ ๅบ ๆด ๅฅฝ ๏ผ ไธ ๅ ช ้ ไฝ ้ ๅฝ ๅคฉ ใ'}
{'generated_text': '<|endoftext|>้จ ่ฟ ๅคฉ ๆดฅ ้ฃ ่ช ่ง ๏ผ ๆ ๆฅ ๅฑฑ ่ฒ ๅฅ ็ ้ ใ ไธ ๅซ ็ฝ ๅ ๅฌ ไบบ ่ ๏ผ ไธ ็ ้ป ่ฑ ้ ้ๅฝ ใ ๅทฒ ้ฃ ้ ๆขๅผ ๅฐ ๅพ ๏ผ ๆด ๅญ ็บข ๆฅ ไธ ๆธๆ ใ ่ฐ ่จ ไธ ้ ๆ ็ฑ ๅฐ ๏ผ ๅด ๆจ ๆฐ ็ง ๆญฃ ๆปก ่กฃ ใ'}
{'generated_text': '<|endoftext|>้จ ่ฟ ้ฃ ๆถ ๆฅ ่ฒ ๆด ๏ผ ๅคฉ ้ซ ๅฐ ่ฟฅ ๆ ไบ ่ฝป ใ ๅฑฑ ๅท ๆฐ ่ฑก ๅๆ ๆ ๏ผ ๆฅผ ้ ไนพ ๅค ๆญฃ ่ช ๆ ใ ๆฐด ้่ฟ ๅๆตฎ ็ป ๆ ๏ผ ๅคฉ ่พน ้ฃ ้ธ ่ ่ก ่ฅ ใ ่ฏ ไบบ ไธ ่ด ๅนณ ็ ๆ ๏ผ ๅบ ่ฎธ ๅ ๅฆ ้ ๅ ๅฌ ใ'}
{'generated_text': '<|endoftext|>้จ ๅฃฐ ๆธ ๆญ ๆฐด ๅฆ ไธ ๏ผ ๅคฉ ๆฐ ๅ ๆถ ๆฅ ๆญฃ ่ฟ ใ ไธ ๅฐบ ้ ็ฐ ็ง ๆฐด ็ปฟ ๏ผ ไบ ๆด ๆฎ ๆผ ๅค ้ณ ๅพฎ ใ ๅฑฑ ่ฟ ๅค ไบ ไบบ ็ ๅฐ ๏ผ ่ทฏ ้ ้ซ ๆฅผ ๅฎข ่ช ๅฝ ใ ๆ ๆ ่ฅฟ ๆน ๆ ้ ๅฅฝ ๏ผ ๆปก ่ฅ ๆฅ ่ฒ ไธ ่ฐ ้ฃ ใ'}
{'generated_text': '<|endoftext|>้จ ๆถ ้ฃ ่ตท ๅฐ ไบญ ๅผ ๏ผ ไธ ็น ๆธๆ ๆปก ้ข ๆฅ ใ ไธ ๅฃ ๆพ ๅฃฐ ้ธฃ ๆ ๆผ ๏ผ ๅ ๅฒฉ ไบ ๆฐ ๅจ ็ง ่ ใ ไบบ ๅฆ ๆ ่ฒ ๅฝ ไฝ ๅค ๏ผ ๅคฉ ไผผ ๆฑ ๅ่ฝ ๅ ๅ ใ ไธ ๆ ๆญค ็ ้ฒ ่ช ๅพ ๏ผ ไธ ้กป ๅผบ ็ฌ ้ฎ ๆก ๆ ใ'}
```
### **References:**
The base model on Hugging Face: "yuanzhoulvpi/gpt2_chinese"
https://github.com/chinese-poetry/chinese-poetry/tree/master/%E5%85%A8%E5%94%90%E8%AF%97
|
oGabrielFreitas/roberta-ufsm-qa
|
oGabrielFreitas
| 2024-01-29T01:38:20Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-25T03:43:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: roberta-ufsm-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-ufsm-qa
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
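As an unofficial illustration, the hyperparameters above map onto a `transformers.TrainingArguments` configuration along these lines; this is not the author's training script, and `output_dir` plus anything not listed are assumed:
```python
from transformers import TrainingArguments
# Hypothetical reconstruction of the listed hyperparameters -- not the original script.
args = TrainingArguments(
    output_dir="roberta-ufsm-qa",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```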
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
songfeng/output_models_ast_gpt
|
songfeng
| 2024-01-29T01:27:23Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-28T22:24:10Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: output_models_ast_gpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_models_ast_gpt
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mlx-community/flan-t5-base-mlx-4bit
|
mlx-community
| 2024-01-29T01:19:03Z | 176 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T01:18:32Z |
[Google's Flan-T5 base](https://huggingface.co/google/flan-t5-base) converted to [MLX](https://github.com/ml-explore/mlx-examples) format and quantized to 4 bits with group size 64.
|
CultriX/Wernicke-7B-v8
|
CultriX
| 2024-01-29T01:16:11Z | 111 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"kaitchup/Mayonnaise-4in1-022",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"vanillaOVO/supermario_v2",
"FelixChao/WestSeverus-7B-DPO-v2",
"base_model:PetroGPT/WestSeverus-7B-DPO-v2",
"base_model:merge:PetroGPT/WestSeverus-7B-DPO-v2",
"base_model:kaitchup/Mayonnaise-4in1-022",
"base_model:merge:kaitchup/Mayonnaise-4in1-022",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:merge:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:vanillaOVO/supermario_v2",
"base_model:merge:vanillaOVO/supermario_v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T22:06:13Z |
---
tags:
- merge
- mergekit
- lazymergekit
- kaitchup/Mayonnaise-4in1-022
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- vanillaOVO/supermario_v2
- FelixChao/WestSeverus-7B-DPO-v2
base_model:
- kaitchup/Mayonnaise-4in1-022
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- vanillaOVO/supermario_v2
- FelixChao/WestSeverus-7B-DPO-v2
license: apache-2.0
---
# Wernicke-7B-v8
Wernicke-7B-v8 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [kaitchup/Mayonnaise-4in1-022](https://huggingface.co/kaitchup/Mayonnaise-4in1-022)
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [vanillaOVO/supermario_v2](https://huggingface.co/vanillaOVO/supermario_v2)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
## 🧩 Configuration
```yaml
models:
- model: CultriX/Wernicke-7B-v1
# No parameters necessary for base model
- model: kaitchup/Mayonnaise-4in1-022
parameters:
density: 0.53
weight: 0.40
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
density: 0.53
weight: 0.25
- model: vanillaOVO/supermario_v2
parameters:
density: 0.53
weight: 0.25
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.53
weight: 0.20
merge_method: dare_ties
base_model: CultriX/Wernicke-7B-v1
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/Wernicke-7B-v8"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Jackline/CodeLlama-Code-BG
|
Jackline
| 2024-01-29T01:15:51Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"region:us"
] | null | 2024-01-29T01:12:15Z |
---
library_name: peft
base_model: codellama/CodeLlama-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
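Absent author-provided code, the sketch below shows one plausible (unverified) way to load this PEFT adapter on top of its CodeLlama base, reusing the 4-bit `bitsandbytes` settings recorded under "Training procedure" at the end of this card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# Unverified sketch: 4-bit base model (matching the bitsandbytes config below) plus this adapter.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Jackline/CodeLlama-Code-BG")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
```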
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.1
|
MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T01:10:17Z | 45 | 1 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T00:59:21Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- license:apache-2.0
- model-index
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
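These bits-per-weight figures give a quick way to estimate on-disk size. The sketch below is illustrative arithmetic only, assuming a nominal 7.24B-parameter Mistral-class model; real GGUF files mix quant types per tensor and carry metadata overhead:
```python
# Back-of-the-envelope GGUF size estimate from the bpw values above.
params = 7.24e9  # approximate parameter count of a Mistral-7B-class model
for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5),
                  ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.1f} GB")
```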
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./OpenHermes-2.5-neural-chat-v3-3-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
ouasdg/tinyllama-pedia
|
ouasdg
| 2024-01-29T01:08:39Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T06:44:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Patcas/plbart-worksDocWorks-step-2
|
Patcas
| 2024-01-29T01:08:34Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/plbart-works",
"base_model:finetune:Patcas/plbart-works",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-29T00:46:41Z |
---
base_model: Patcas/plbart-works
tags:
- generated_from_trainer
model-index:
- name: plbart-worksDocWorks-step-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-worksDocWorks-step-2
This model is a fine-tuned version of [Patcas/plbart-works](https://huggingface.co/Patcas/plbart-works) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.0300 |
| No log | 2.0 | 460 | 0.9436 |
| 0.9846 | 3.0 | 690 | 0.9429 |
| 0.9846 | 4.0 | 920 | 0.9441 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
gayanin/pubmed-mixed-noise-v3-0.1
|
gayanin
| 2024-01-29T01:05:03Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-29T00:37:11Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: pubmed-mixed-noise-v3-0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmed-mixed-noise-v3-0.1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2607
## Model description
More information needed
## Intended uses & limitations
More information needed
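Pending details from the author, one plausible way to try the checkpoint is through the standard `text2text-generation` pipeline. The task framing below (denoising PubMed-style text) is inferred from the model name and is not confirmed by this card; the noisy input string is invented:
```python
from transformers import pipeline
# Hedged usage sketch -- the denoising task is an assumption, not documented behavior.
corrector = pipeline("text2text-generation", model="gayanin/pubmed-mixed-noise-v3-0.1")
noisy = "the pattient was administred 50 mg of atenolol dialy"  # made-up noisy input
print(corrector(noisy, max_length=64)[0]["generated_text"])
```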
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.5222 | 0.11 | 500 | 0.4206 |
| 0.3862 | 0.21 | 1000 | 0.3907 |
| 0.4108 | 0.32 | 1500 | 0.3568 |
| 0.3871 | 0.43 | 2000 | 0.3415 |
| 0.3846 | 0.54 | 2500 | 0.3240 |
| 0.3313 | 0.64 | 3000 | 0.3124 |
| 0.3317 | 0.75 | 3500 | 0.3066 |
| 0.3136 | 0.86 | 4000 | 0.3049 |
| 0.3267 | 0.96 | 4500 | 0.2925 |
| 0.2816 | 1.07 | 5000 | 0.2929 |
| 0.2421 | 1.18 | 5500 | 0.2882 |
| 0.2643 | 1.28 | 6000 | 0.2872 |
| 0.2776 | 1.39 | 6500 | 0.2824 |
| 0.2854 | 1.5 | 7000 | 0.2751 |
| 0.2301 | 1.61 | 7500 | 0.2756 |
| 0.2118 | 1.71 | 8000 | 0.2770 |
| 0.2079 | 1.82 | 8500 | 0.2732 |
| 0.2474 | 1.93 | 9000 | 0.2631 |
| 0.1482 | 2.03 | 9500 | 0.2693 |
| 0.1908 | 2.14 | 10000 | 0.2656 |
| 0.2017 | 2.25 | 10500 | 0.2647 |
| 0.1687 | 2.35 | 11000 | 0.2680 |
| 0.191 | 2.46 | 11500 | 0.2630 |
| 0.1821 | 2.57 | 12000 | 0.2618 |
| 0.2301 | 2.68 | 12500 | 0.2605 |
| 0.2106 | 2.78 | 13000 | 0.2601 |
| 0.1637 | 2.89 | 13500 | 0.2617 |
| 0.1902 | 3.0 | 14000 | 0.2607 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mlx-community/flan-t5-small-mlx-4bit
|
mlx-community
| 2024-01-29T01:01:57Z | 192 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-01-29T00:12:50Z |
[Google's Flan-T5 small](https://huggingface.co/google/flan-t5-small) converted to [MLX](https://github.com/ml-explore/mlx-examples) format and quantized to 4 bits with group size 64.
|
charleschen2022/zephyr-support-chatbot
|
charleschen2022
| 2024-01-29T00:59:31Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2024-01-29T00:54:08Z |
---
license: mit
base_model: TheBloke/zephyr-7B-alpha-GPTQ
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: zephyr-support-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
chaseadvaita/Founder_Scrape
|
chaseadvaita
| 2024-01-29T00:56:15Z | 0 | 0 | null |
[
"dataset:HuggingFaceM4/WebSight",
"license:apache-2.0",
"region:us"
] | null | 2024-01-29T00:55:28Z |
---
license: apache-2.0
datasets:
- HuggingFaceM4/WebSight
---
|
kwaikeg/kagentlms_qwen_14b_mat
|
kwaikeg
| 2024-01-29T00:53:29Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"text-generation",
"en",
"zh",
"dataset:kwaikeg/KAgentInstruct",
"dataset:kwaikeg/KAgentBench",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T00:28:25Z |
---
license: cc-by-nc-nd-4.0
datasets:
- kwaikeg/KAgentInstruct
- kwaikeg/KAgentBench
language:
- en
- zh
pipeline_tag: text-generation
---
KwaiAgents ([Github](https://github.com/KwaiKEG/KwaiAgents)) is a series of Agent-related works open-sourced by the [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). The open-sourced content includes:
1. **KAgentSys-Lite**: An experimental Agent Loop implemented based on open-source search engines, browsers, time, calendar, weather, and other tools, which is only missing the memory mechanism and some search capabilities compared to the system in the paper.
2. **KAgentLMs**: A series of large language models with Agent capabilities such as planning, reflection, and tool-use, acquired through the Meta-agent tuning proposed in the paper.
3. **KAgentInstruct**: Fine-tuned data of instructions generated by the Meta-agent in the paper.
4. **KAgentBench**: Over 3,000 human-edited, automated evaluation data for testing Agent capabilities, with evaluation dimensions including planning, tool-use, reflection, concluding, and profiling.
## User Guide
### Direct usage
Tutorial can refer to [QwenLM/Qwen](https://github.com/QwenLM/Qwen)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("kwaikeg/kagentlms_qwen_14b_mat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"kwaikeg/kagentlms_qwen_14b_mat",
device_map="auto",
trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "ไฝ ๅฅฝ", history=None)
print(response)
```
### AgentLMs as service
We recommend using [vLLM](https://github.com/vllm-project/vllm) and [FastChat](https://github.com/lm-sys/FastChat) to deploy the model inference service. First, you need to install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):
```bash
pip install vllm
pip install "fschat[model_worker,webui]"
```
To deploy KAgentLMs, you first need to start the controller in one terminal.
```bash
python -m fastchat.serve.controller
```
Second, run the following command in another terminal to start a single-GPU inference worker:
```bash
python -m fastchat.serve.vllm_worker --model-path $model_path --trust-remote-code
```
Here `$model_path` is the local path of the downloaded model. If the GPU does not support bfloat16, you can add `--dtype half` to the command line.
Third, start the REST API server in a third terminal.
```bash
python -m fastchat.serve.openai_api_server --host localhost --port 8888
```
Finally, you can invoke the model with curl using the OpenAI-compatible calling format. Here's an example:
```bash
curl http://localhost:8888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "kagentlms_qwen_7b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
```
### Citation
```
@article{pan2023kwaiagents,
author = {Haojie Pan and
Zepeng Zhai and
Hao Yuan and
Yaojia Lv and
Ruiji Fu and
Ming Liu and
Zhongyuan Wang and
Bing Qin
},
title = {KwaiAgents: Generalized Information-seeking Agent System with Large Language Models},
journal = {CoRR},
volume = {abs/2312.04889},
year = {2023}
}
```
|
majed316/jais-13b-chat-4bit
|
majed316
| 2024-01-29T00:52:25Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"jais",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-28T00:03:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T00:51:29Z | 40 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"openaccess-ai-collective/DPOpenHermes-7B-v2",
"en",
"dataset:teknium/openhermes",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Intel/orca_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T00:40:41Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- openaccess-ai-collective/DPOpenHermes-7B-v2
- en
- dataset:teknium/openhermes
- dataset:allenai/ultrafeedback_binarized_cleaned
- dataset:Intel/orca_dpo_pairs
- base_model:teknium/OpenHermes-2.5-Mistral-7B
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./DPOpenHermes-7B-v2-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
syedmhamudulhasan/bert-base-cased-wikitext2
|
syedmhamudulhasan
| 2024-01-29T00:49:40Z | 213 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-28T20:58:30Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
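For reference, these settings map onto the `transformers` Trainer roughly as follows (a minimal sketch using standard `TrainingArguments` fields; not the exact training script used here):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; model and dataset wiring omitted
args = TrainingArguments(
    output_dir="bert-base-cased-wikitext2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```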
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1594 | 1.0 | 2249 | 7.1539 |
| 7.0244 | 2.0 | 4498 | 6.9837 |
| 6.9409 | 3.0 | 6747 | 6.9561 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.1
|
asun17904/anliR3-t5-base
|
asun17904
| 2024-01-29T00:45:20Z | 0 | 0 |
pytorch
|
[
"pytorch",
"en",
"license:mit",
"region:us"
] | null | 2024-01-28T09:16:06Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Dataset: ANLI
Round: None
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 32
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters
- `numerical stability denominator constant` = 1.0
- `lambda` = 1.0
- `alpha` = 1.0
- `beta` = 1.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|1.110|0.424|1.0|
|1.098|0.440|2.0|
|1.104|0.432|3.0|
|1.096|0.447|4.0|
|1.098|0.449|5.0|
|1.106|0.438|6.0|
|1.109|0.434|7.0|
|1.097|0.451|8.0|
|1.086|0.459|9.0|
|1.094|0.452|10.0|
|1.100|0.445|11.0|
|1.104|0.441|12.0|
|1.092|0.455|13.0|
|1.087|0.458|14.0|
|1.092|0.451|15.0|
|1.092|0.453|16.0|
|1.089|0.456|17.0|
|1.083|0.464|18.0|
|1.089|0.458|19.0|
**Test Accuracy: 0.460**
|
Antonini01/distilbert-base-uncased-lora-text-classification
|
Antonini01
| 2024-01-29T00:18:13Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-01-29T00:18:10Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9247
- Accuracy: {'accuracy': 0.886}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.3986 | {'accuracy': 0.877} |
| 0.429 | 2.0 | 500 | 0.5109 | {'accuracy': 0.885} |
| 0.429 | 3.0 | 750 | 0.4885 | {'accuracy': 0.884} |
| 0.2188 | 4.0 | 1000 | 0.6639 | {'accuracy': 0.882} |
| 0.2188 | 5.0 | 1250 | 0.6673 | {'accuracy': 0.882} |
| 0.0841 | 6.0 | 1500 | 0.7289 | {'accuracy': 0.895} |
| 0.0841 | 7.0 | 1750 | 0.8089 | {'accuracy': 0.887} |
| 0.0278 | 8.0 | 2000 | 0.8884 | {'accuracy': 0.88} |
| 0.0278 | 9.0 | 2250 | 0.9264 | {'accuracy': 0.884} |
| 0.016 | 10.0 | 2500 | 0.9247 | {'accuracy': 0.886} |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-29T00:13:51Z | 77 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"Weyaxi/MetaMath-Tulpar-7b-v2-Slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-29T00:03:16Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Weyaxi/MetaMath-Tulpar-7b-v2-Slerp
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
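As a quick sanity check of the Q4_K figure above, the arithmetic can be reproduced directly (a back-of-the-envelope sketch assuming 256-weight super-blocks with an fp16 super-block scale and min; not the exact on-disk struct):
```python
# Rough bits-per-weight estimate for GGML_TYPE_Q4_K (assumptions noted above)
weights = 8 * 32              # 8 blocks of 32 weights per super-block
quant_bits = weights * 4      # 4-bit quantised weights
scale_bits = 8 * (6 + 6)      # 6-bit scale + 6-bit min per block
super_bits = 2 * 16           # fp16 super-block scale and min (assumed)
print((quant_bits + scale_bits + super_bits) / weights)  # -> 4.5, matching the 4.5 bpw above
```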
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./MetaMath-Tulpar-7b-v2-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test.2
|
Josephgflowers
| 2024-01-29T00:12:24Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T23:39:03Z |
---
license: mit
---
A 1.3B-parameter test of Cinder trained on math and step-by-step reasoning.
## Model Overview
Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the TinyLlama 1.1B parameter model and trained on a unique combination of datasets.
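A minimal way to try it with `transformers` (a sketch; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test.2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Illustrative step-by-step reasoning prompt
inputs = tokenizer("What is 17 + 25? Think step by step.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```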
|
RadG/code-llama-7b-text-to-sql
|
RadG
| 2024-01-28T23:58:57Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-01-28T07:11:10Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: code-llama-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF
|
MaziyarPanahi
| 2024-01-28T23:55:48Z | 64 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"base_model:MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1",
"base_model:quantized:MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1",
"conversational"
] |
text-generation
| 2024-01-28T23:45:12Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- Safetensors
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
model_name: MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF
base_model: MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1)
## Description
[MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF) and below it, a specific filename to download, such as: MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
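For instance, pointing LangChain at a downloaded quant might look like this (a sketch; class and parameter names assume a recent `langchain-community` release, and the local path is hypothetical):
```python
from langchain_community.llms import LlamaCpp

# Hypothetical path to a downloaded quant; tune n_gpu_layers/n_ctx for your hardware
llm = LlamaCpp(
    model_path="./MetaMath-Chupacabra-7B-v2.01-Slerp-Mistral-7B-Instruct-v0.1-GGUF.Q4_K_M.gguf",
    n_gpu_layers=35,
    n_ctx=32768,
    temperature=0.7,
)
print(llm.invoke("Write a story about llamas."))
```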
|
LoneStriker/Tess-34B-v1.5b-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-28T23:55:45Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T23:43:16Z |
---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-34B-v1.5b was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
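A filled-in example (the system context and question are illustrative):
```
SYSTEM: You are Tess, a helpful assistant.
USER: Summarize the plot of Romeo and Juliet in two sentences.
ASSISTANT:
```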
|
backnotprop/np_cr_model5
|
backnotprop
| 2024-01-28T23:52:31Z | 24 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-28T23:33:41Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_0.png"
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_1.png"
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_2.png"
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: something,minimalism,white_background,abstract,photoshop generated abstract on a white background
license: openrail++
---
# SDXL LoRA DreamBooth - backnotprop/np_cr_model5
<Gallery />
## Model description
### These are backnotprop/np_cr_model5 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`np_cr_model5.safetensors` here 💾](/backnotprop/np_cr_model5/blob/main/np_cr_model5.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:np_cr_model5:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`np_cr_model5_emb.safetensors` here 💾](/backnotprop/np_cr_model5/blob/main/np_cr_model5_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `np_cr_model5_emb` to your prompt. For example, `something,minimalism,white_background,abstract,photoshop generated abstract on a white background`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('backnotprop/np_cr_model5', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='backnotprop/np_cr_model5', filename='np_cr_model5_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=[], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
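For example, continuing the diffusers snippet above, the concept identifier in the prompt can be swapped for the trained tokens (illustrative only):
```py
# Hypothetical prompt using the inserted tokens <s0><s1> in place of the concept identifier
image = pipeline('<s0><s1>,minimalism,white_background,abstract,photoshop generated abstract on a white background').images[0]
```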
## Details
All [Files & versions](/backnotprop/np_cr_model5/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
fionazhang/fine-tune-mistral-environment
|
fionazhang
| 2024-01-28T23:49:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-28T23:13:29Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: fine-tune-mistral-environment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-mistral-environment
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0a0+git7bcf7da
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/Tess-34B-v1.5b-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-28T23:43:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T23:32:54Z |
---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-34B-v1.5b was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
Xianjun/Quokka-13b-base
|
Xianjun
| 2024-01-28T23:41:15Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2401.01089",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T21:45:53Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is optimized for materials science by continued pretraining on over 1 million materials science academic articles, based on LLaMa-2-13b.
- **Developed by:** [UCSB]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [LLaMa-2]
- **Paper [optional]:** [https://arxiv.org/pdf/2401.01089.pdf]
- **Demo [optional]:** [More Information Needed]
## How to Get Started with the Model
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/Quokka-13b-base")
model = LlamaForCausalLM.from_pretrained("Xianjun/Quokka-13b-base").half().to("cuda")
instruction = "How to ..."
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")
with torch.no_grad():
output = model.generate(**batch, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
```
## Citation
If you find Quokka useful in your research, please cite the following paper:
```latex
@inproceedings{Yang2024QuokkaAO,
title={Quokka: An Open-source Large Language Model ChatBot for Material Science},
author={Xianjun Yang and Stephen Wilson and Linda Ruth Petzold},
year={2024},
url={https://api.semanticscholar.org/CorpusID:266725577}
}
```
|
Xianjun/Quokka-13b-instruct
|
Xianjun
| 2024-01-28T23:40:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2401.01089",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T22:53:12Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is optimized for materials science by continued pretraining on over 1 million materials science academic articles, based on LLaMa-2-13b, and further fine-tuned on materials science instructions.
- **Developed by:** [UCSB]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [LLaMa-2]
- **Paper [optional]:** [https://arxiv.org/pdf/2401.01089.pdf]
- **Demo [optional]:** [More Information Needed]
## How to Get Started with the Model
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/Quokka-13b-instruct")
model = LlamaForCausalLM.from_pretrained("Xianjun/Quokka-13b-instruct").half().to("cuda")
instruction = "How to ..."
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")
with torch.no_grad():
output = model.generate(**batch, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
```
## Citation
If you find Quokka useful in your research, please cite the following paper:
```latex
@inproceedings{Yang2024QuokkaAO,
title={Quokka: An Open-source Large Language Model ChatBot for Material Science},
author={Xianjun Yang and Stephen Wilson and Linda Ruth Petzold},
year={2024},
url={https://api.semanticscholar.org/CorpusID:266725577}
}
```
|
ielabgroup/vec2text_gtr-base-st_corrector
|
ielabgroup
| 2024-01-28T23:36:25Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-28T03:59:18Z |
---
license: apache-2.0
---
To use the model, check the vec2text repo: [https://github.com/jxmorris12/vec2text](https://github.com/jxmorris12/vec2text)
# Example:
```python
from sentence_transformers import SentenceTransformer
import vec2text
import transformers
inversion_model = vec2text.models.InversionModel.from_pretrained(
"ielabgroup/vec2text_gtr-base-st_inversion"
)
model = vec2text.models.CorrectorEncoderModel.from_pretrained(
"ielabgroup/vec2text_gtr-base-st_corrector"
)
inversion_trainer = vec2text.trainers.InversionTrainer(
model=inversion_model,
train_dataset=None,
eval_dataset=None,
data_collator=transformers.DataCollatorForSeq2Seq(
inversion_model.tokenizer,
label_pad_token_id=-100,
),
)
model.config.dispatch_batches = None
corrector = vec2text.trainers.Corrector(
model=model,
inversion_trainer=inversion_trainer,
args=None,
data_collator=vec2text.collator.DataCollatorForCorrection(
tokenizer=inversion_trainer.model.tokenizer
),
)
model = SentenceTransformer('sentence-transformers/gtr-t5-base')
embeddings = model.encode([
"Jack Morris is a PhD student at Cornell Tech in New York City",
"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity"
], convert_to_tensor=True).to('mps')  # 'mps' targets Apple silicon; use 'cuda' or 'cpu' on other hardware
vec2text.invert_embeddings(
embeddings=embeddings,
corrector=corrector,
num_steps=20,
)
[' Jack Morris is a PhD student at Cornell Tech in New York', 'It was the best of times, it was the worst of times, it was the epoch of incredulity, it was age of']
```
|
ielabgroup/vec2text_gtr-base-st_inversion
|
ielabgroup
| 2024-01-28T23:35:41Z | 2,543 | 2 |
transformers
|
[
"transformers",
"safetensors",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-28T03:46:53Z |
---
license: apache-2.0
---
To use the model, check the vec2text repo: [https://github.com/jxmorris12/vec2text](https://github.com/jxmorris12/vec2text)
# Example:
```python
from sentence_transformers import SentenceTransformer
import vec2text
import transformers
inversion_model = vec2text.models.InversionModel.from_pretrained(
"ielabgroup/vec2text_gtr-base-st_inversion"
)
model = vec2text.models.CorrectorEncoderModel.from_pretrained(
"ielabgroup/vec2text_gtr-base-st_corrector"
)
inversion_trainer = vec2text.trainers.InversionTrainer(
model=inversion_model,
train_dataset=None,
eval_dataset=None,
data_collator=transformers.DataCollatorForSeq2Seq(
inversion_model.tokenizer,
label_pad_token_id=-100,
),
)
model.config.dispatch_batches = None
corrector = vec2text.trainers.Corrector(
model=model,
inversion_trainer=inversion_trainer,
args=None,
data_collator=vec2text.collator.DataCollatorForCorrection(
tokenizer=inversion_trainer.model.tokenizer
),
)
model = SentenceTransformer('sentence-transformers/gtr-t5-base')
embeddings = model.encode([
"Jack Morris is a PhD student at Cornell Tech in New York City",
"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity"
], convert_to_tensor=True).to('mps')  # 'mps' targets Apple silicon; use 'cuda' or 'cpu' on other hardware
vec2text.invert_embeddings(
embeddings=embeddings,
corrector=corrector,
num_steps=20,
)
[' Jack Morris is a PhD student at Cornell Tech in New York', 'It was the best of times, it was the worst of times, it was the epoch of incredulity, it was age of']
```
|
backnotprop/np_cr_model4
|
backnotprop
| 2024-01-28T23:33:01Z | 15 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-28T22:43:35Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_0.png"
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_1.png"
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_2.png"
- text: 'spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: something,minimalism,white_background,abstract,photoshop generated abstract on a white background
license: openrail++
---
# SDXL LoRA DreamBooth - backnotprop/np_cr_model4
<Gallery />
## Model description
### These are backnotprop/np_cr_model4 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`np_cr_model4.safetensors` here 💾](/backnotprop/np_cr_model4/blob/main/np_cr_model4.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:np_cr_model4:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`np_cr_model4_emb.safetensors` here 💾](/backnotprop/np_cr_model4/blob/main/np_cr_model4_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `np_cr_model4_emb` to your prompt. For example, `something,minimalism,white_background,abstract,photoshop generated abstract on a white background`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('backnotprop/np_cr_model4', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='backnotprop/np_cr_model4', filename='np_cr_model4_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=[], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('spiral wave flower,minimalism,white_background,abstract,photoshop generated abstract on a white background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/backnotprop/np_cr_model4/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
weightbot/swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification-ft
|
weightbot
| 2024-01-28T23:24:05Z | 198 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:weightbot/swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification",
"base_model:finetune:weightbot/swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-28T22:43:41Z |
---
license: apache-2.0
base_model: weightbot/swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification-ft
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8773946360153256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification-ft
This model is a fine-tuned version of [weightbot/swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification](https://huggingface.co/weightbot/swin-tiny-patch4-window7-224-finetuned-plant-classification-finetuned-crops-classification) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3404
- Accuracy: 0.8774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4665 | 1.0 | 201 | 0.3881 | 0.8352 |
| 0.4054 | 2.0 | 403 | 0.3799 | 0.8582 |
| 0.3735 | 2.99 | 603 | 0.3404 | 0.8774 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LoneStriker/Tess-34B-v1.5b-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-28T23:23:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T23:14:50Z |
---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-34B-v1.5b was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
Ivan2323132/rust
|
Ivan2323132
| 2024-01-28T23:20:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-28T23:20:15Z |
Funny picture based on the video game Rust
|
AzureBlack/KitchenSink_103b-4.5bpw-6h-exl2
|
AzureBlack
| 2024-01-28T23:18:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"rp",
"erp",
"chat",
"storywriting",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-28T23:09:02Z |
---
license: llama2
language:
- en
tags:
- rp
- erp
- chat
- storywriting
---
# Kitchen Sink 103b

This model is a rotating-stack merge of three 70b models in a 103b (120 layer) configuration inspired by Venus 103b. The result of
this "frankenmerge" is a large model that contains a little bit of everything - including the kitchen sink. RP, chat, storywriting,
and instruct are all well supported. It may or may not code well - I lack the expertise to test it in that capacity, but considering
the source models, I suspect it is unlikely.
Component models for the rotating stack are
- royallab/Aetheria-L2-70B
- lizpreciatior/lzlv_70b_fp16_hf
- Sao10K/WinterGoddess-1.4x-70B-L2
Components of those models are purported to include: Nous-Hermes-Llama2-70b, Xwin-LM-7B-V0.1, Mythospice-70b, Euryale-1.3-L2-70B,
tulu-2-dpo-70b, GOAT-70B-Storytelling, Platypus2-70B-instruct, Lila-70B, SunsetBoulevard, and some private LoRAs.
As all components are based on Llama2 70b, native context length is 4k tokens. Coherency out to 8k is extremely good with rope scaling,
but starts to decline beyond that.
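As a hedged illustration only, linear RoPE scaling can be requested at load time in `transformers`; the hub id below is hypothetical (this entry is an exl2 quant, which `transformers` does not load directly) and the factor of 2.0 simply maps the 4k-native weights to roughly 8k:
```python
# Hedged sketch: linear RoPE scaling at load time; all values are illustrative.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "MarsupialAI/KitchenSink_103b",  # hypothetical id for full-precision weights
    rope_scaling={"type": "linear", "factor": 2.0},  # 4k native -> ~8k effective
    device_map="auto",
)
```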
This model is uncensored and perfectly capable of generating objectionable material. However, it is not an explicitly-NSFW model,
and it has never "gone rogue" and tried to insert NSFW content into SFW prompts in my experience. As with any LLM, no factual claims
made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have?
Assume this has it too. This model is for entertainment purposes only.
FP16 and Q4_K_S GGUFs are located here: https://huggingface.co/MarsupialAI/KitchenSink_103b_GGUF_Q4KS_FP16
If you create additional quants, please let me know and I will link them here as well.
# Sample output
```
{{[INPUT]}}
Write a detailed and humorous story about a cute and fluffy bunny that goes to a Gwar concert.
{{[OUTPUT]}}
Once upon a time, in a magical land called Fluffyville, there lived a cute and fluffy little bunny named Fuzzlesworth. Fuzzlesworth
was your average, everyday bunny; he hopped around, nibbled on carrots, and generally lived a quiet, peaceful life. However,
Fuzzlesworth had a secret: he was a closet metalhead. When no one else was around, he would crank up his tiny bunny-sized speakers
and rock out to the loudest, most aggressive metal music he could find. His favorite band of all time was Gwar, the infamous
intergalactic warriors who sprayed fake blood and played insane heavy metal.
One day, Fuzzlesworth stumbled upon a flyer nailed to a tree trunk in the meadow that announced that Gwar would be performing at a
nearby human concert venue. His fluffy little bunny heart skipped a beat! He simply had to go! But, he knew his friends and family
would never understand his love for such an aggressive and intense music, so he decided to keep his plan a secret.
The big night finally arrived, and Fuzzlesworth, dressed in his best leather bunny-sized studded vest and spiked collar, hopped
his way to the concert venue. He stood outside the door, shaking with excitement and nervousness. The line of people waiting to
enter was intimidating: there were tall, muscular men and women covered in spikes, chains, and tattoos, and Fuzzlesworth felt like
a small, fluffy speck in a sea of metal.
But Fuzzlesworth was determined to see his favorite band, so he summoned up his bunny-sized courage and hopped to the front of the
line. The bouncer, a hulking behemoth of a man, looked down at Fuzzlesworth and raised a thick, hairy eyebrow.
"Umm...I'm here for the Gwar concert," Fuzzlesworth squeaked, holding up his ticket.
The bouncer stared at him incredulously for a moment, then let out a hearty laugh. "Aw, look at the little bunny in his metal gear!
Alright, go on in, but stay out of trouble, you hear?"
Fuzzlesworth's tail was wagging so hard, it could've powered a small wind turbine. He bounced his way inside the venue, taking in
the sights and sounds of the raucous crowd. The air was thick with the scent of sweat, cheap beer, and dry-ice smoke. He found a
small nook near the stage where he could see everything and waited for the show to start.
Finally, the lights dimmed, and the band took the stage. Fuzzlesworth's eyes went wide as the members of Gwar, adorned in their
grotesque, otherworldly costumes, began to thrash and wail on their instruments. He couldn't believe his fluffy little ears! He
was actually at a Gwar concert!
As the set went on, Fuzzlesworth got more and more into the music, headbanging and thrashing around in his own adorable bunny way.
The other concertgoers couldn't help but smile and chuckle at the sight of the tiny bunny moshing it up in the front row.
During one particularly intense song, Fuzzlesworth found himself lifted up by the mosh pit and deposited on stage. He looked out
at the roaring crowd, his little bunny heart pounding. In a moment of metal madness, he jumped up on the speaker stack and began
to shred an air-guitar solo, much to the amusement of the band and audience alike.
At the end of the set, the band's lead singer, Oderus Urungus, spotted Fuzzlesworth and motioned him over. Fuzzlesworth, overcome
with joy, hopped over and was lifted up onto the frontman's massive, spiked shoulder.
"Ladies and gentlemen," Oderus bellowed into the microphone, "I'd like to introduce our special guest tonight! Give it up for
Fuzzlesworth the Metal Bunny!"
The crowd went wild, cheering and clapping as Fuzzlesworth soaked up the attention. He had never felt so alive!
After the show, Fuzzlesworth got to meet the band, posing for photos and getting autographs. As he hopped back home that night,
his head still buzzing with metal and excitement, he knew that his life would never be the same. He was no longer just a small,
fluffy bunny from Fluffyville; he was Fuzzlesworth the Metal Bunny, and he wasn't afraid to rock out!
```
# Prompt format
Seems to have the strongest affinity for Alpaca prompts, but Vicuna works as well. Considering the variety of components, most
formats will probably work to some extent.
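A minimal sketch of the Alpaca format named above; the card does not mandate an exact template, so this is the common community variant with a placeholder instruction:
```python
# Common Alpaca-style template (an assumption; the card names the format only).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Write a haiku about a bunny at a metal show."
)
print(prompt)
```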
# WTF is a rotating-stack merge?
Inspired by Undi's experiments with stacked merges, Jeb Carter found that output quality and model initiative could be significantly
improved by reversing the model order in the stack, and then doing a linear merge between the original and reversed stacks. That is
what I did here. I created three passthrough stacked merges using the three source models (rotating the model order in each stack),
and then did a linear merge of all three stacks. The exact merge configs can be found in the recipe.txt file.
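A hedged sketch of just the stack-rotation step described above; the actual merges were presumably produced with a merge tool whose exact configs live in the recipe.txt file, so this only illustrates the three orderings and the final averaging conceptually:
```python
# Sketch of the rotating-stack idea: three stack orderings, then a linear merge.
models = [
    "royallab/Aetheria-L2-70B",
    "lizpreciatior/lzlv_70b_fp16_hf",
    "Sao10K/WinterGoddess-1.4x-70B-L2",
]

def rotations(seq):
    """Return every rotation of seq: ABC, BCA, CAB."""
    return [seq[i:] + seq[:i] for i in range(len(seq))]

for order in rotations(models):
    # Each ordering defines one passthrough stacked merge (120 layers total).
    print(" -> ".join(order))

# The final model is a linear (equal-weight) merge of the three stacks;
# see recipe.txt for the exact configs.
```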
|
Patcas/plbart-worksNoDocWorks-step-2
|
Patcas
| 2024-01-28T23:12:57Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/plbart-works",
"base_model:finetune:Patcas/plbart-works",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-28T22:53:13Z |
---
base_model: Patcas/plbart-works
tags:
- generated_from_trainer
model-index:
- name: plbart-worksNoDocWorks-step-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-worksNoDocWorks-step-2
This model is a fine-tuned version of [Patcas/plbart-works](https://huggingface.co/Patcas/plbart-works) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0116
## Model description
More information needed
## Intended uses & limitations
More information needed
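Absent guidance from the author, a minimal hedged inference sketch; the source snippet is a placeholder and the expected input/output convention depends on the (undocumented) training data:
```python
# Hedged inference sketch for the fine-tuned PLBART checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Patcas/plbart-worksNoDocWorks-step-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

source = "public int add(int a, int b) { return a + b; }"  # placeholder input
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```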
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.0699 |
| No log | 2.0 | 460 | 1.0057 |
| 0.9714 | 3.0 | 690 | 1.0063 |
| 0.9714 | 4.0 | 920 | 1.0116 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
fionazhang/mistral-experiment-6-merge
|
fionazhang
| 2024-01-28T23:11:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T06:10:38Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-experiment-6-merge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-experiment-6
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1400
## Model description
More information needed
## Intended uses & limitations
More information needed
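Since this repository is a PEFT adapter on Mistral-7B-v0.1, a minimal hedged loading sketch might look like the following; the merge step is optional and simply folds the adapter into the base weights for standalone inference:
```python
# Hedged sketch: load the PEFT adapter on top of the Mistral-7B base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fionazhang/mistral-experiment-6-merge")

model = model.merge_and_unload()  # optional: merge adapter into base weights
```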
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0a0+git7bcf7da
- Datasets 2.16.1
- Tokenizers 0.15.0
|