modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-08-30 06:27:36
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 527
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-08-30 06:27:12
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Seokeon/V14_R384_lora_pp_dog6
|
Seokeon
| 2024-01-16T10:20:16Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T10:14:14Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_pp_dog6
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
KangXen/enmr
|
KangXen
| 2024-01-16T10:19:07Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
feature-extraction
| 2024-01-16T10:18:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arsene123/lora-trained-xl
|
arsene123
| 2024-01-16T10:18:41Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-16T09:32:36Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_0.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_1.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_2.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
license: openrail++
---
# SDXL LoRA DreamBooth - arsene123/lora-trained-xl
<Gallery />
## Model description
These are arsene123/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](arsene123/lora-trained-xl/tree/main) them in the Files & versions tab.
|
Seokeon/V14_R256_lora_pp_dog6
|
Seokeon
| 2024-01-16T10:13:51Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T10:10:11Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R256_lora_pp_dog6
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
Seokeon/V14_R384_lora_none_berry_bowl
|
Seokeon
| 2024-01-16T10:11:06Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T10:08:19Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks bowl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_berry_bowl
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks bowl using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
ryusangwon/2758_Llama-2-7b-hf
|
ryusangwon
| 2024-01-16T10:10:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:xsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-16T10:10:48Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: 2758_Llama-2-7b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2758_Llama-2-7b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jvh/Mistral-NeuralBeagle14-GEITje
|
jvh
| 2024-01-16T10:09:48Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Rijgersberg/GEITje-7B-chat-v2",
"base_model:merge:Rijgersberg/GEITje-7B-chat-v2",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:merge:mlabonne/NeuralBeagle14-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T10:06:45Z |
---
base_model:
- Rijgersberg/GEITje-7B-chat-v2
- mlabonne/NeuralBeagle14-7B
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Rijgersberg/GEITje-7B-chat-v2](https://huggingface.co/Rijgersberg/GEITje-7B-chat-v2)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Rijgersberg/GEITje-7B-chat-v2
layer_range: [0, 32]
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Rijgersberg/GEITje-7B-chat-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
jlvdoorn/whisper-large-v2-atcosim
|
jlvdoorn
| 2024-01-16T10:08:27Z | 21 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"doi:10.57967/hf/1374",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-21T06:45:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-atcosim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-atcosim
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0552
- Wer: 9.9694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 12500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0038 | 8.33 | 1000 | 0.0357 | 2.7829 |
| 0.001 | 16.67 | 2000 | 0.0384 | 2.0004 |
| 0.0015 | 25.0 | 3000 | 0.0373 | 31.7142 |
| 0.0001 | 33.33 | 4000 | 0.0437 | 2.3152 |
| 0.0019 | 41.67 | 5000 | 0.0446 | 7.2375 |
| 0.0 | 50.0 | 6000 | 0.0462 | 2.9033 |
| 0.0 | 58.33 | 7000 | 0.0490 | 4.3295 |
| 0.0 | 66.67 | 8000 | 0.0509 | 5.8668 |
| 0.0 | 75.0 | 9000 | 0.0524 | 7.5014 |
| 0.0 | 83.33 | 10000 | 0.0536 | 8.6405 |
| 0.0 | 91.67 | 11000 | 0.0546 | 9.5018 |
| 0.0 | 100.0 | 12000 | 0.0552 | 9.9694 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Seokeon/V14_R384_lora_none_bear_plushie
|
Seokeon
| 2024-01-16T10:07:58Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T10:04:40Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks stuffed animal
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_bear_plushie
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
Seokeon/V14_R384_lora_none_grey_sloth_plushie
|
Seokeon
| 2024-01-16T10:01:06Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:57:45Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks stuffed animal
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_grey_sloth_plushie
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
facebook/audio-magnet-small
|
facebook
| 2024-01-16T09:57:18Z | 222 | 8 |
audiocraft
|
[
"audiocraft",
"magnet",
"text-to-audio",
"arxiv:2401.04577",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-audio
| 2024-01-10T20:16:04Z |
---
inference: true
tags:
- magnet
- audiocraft
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
---
# Audio-MAGNeT - Small - 300M
MAGNeT is a text-to-music and text-to-sound model capable of generating high-quality audio samples conditioned on text descriptions.
It is a masked generative non-autoregressive Transformer trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike prior work, MAGNeT doesn't require neither semantic token conditioning nor model cascading, and it generates all 4 codebooks using a single non-autoregressive Transformer.
MAGNeT was published in [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577) by *Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi*.
Six checkpoints are released:
- [small-10secs](https://huggingface.co/facebook/magnet-small-10secs)
- [medium-10secs](https://huggingface.co/facebook/magnet-medium-10secs)
- [small-30secs](https://huggingface.co/facebook/magnet-small-30secs)
- [medium-30secs](https://huggingface.co/facebook/magnet-medium-30secs)
- [**audio-small** (this checkpoint)](https://huggingface.co/facebook/audio-magnet-small)
- [audio-medium](https://huggingface.co/facebook/audio-magnet-medium)
## 🤗 Transformers Usage
Coming soon...
## Audiocraft Usage
You can run MAGNeT locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MAGNeT
from audiocraft.data.audio import audio_write
model = MAGNeT.get_pretrained("facebook/audio-magnet-small")
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MAGNeT was trained between November 2023 and January 2024.
**Model version:** This is the version 1 of the model.
**Model type:** MAGNeT consists of an EnCodec model for audio tokenization, an non-autoregressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B; and two variants: a model trained for text-to-music generation task and a model trained for text-to-audio generation.
**Paper or resources for more information:** More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577).
**Citation details:**
```
@misc{ziv2024masked,
title={Masked Audio Generation using a Single Non-Autoregressive Transformer},
author={Alon Ziv and Itai Gat and Gael Le Lan and Tal Remez and Felix Kreuk and Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2024},
eprint={2401.04577},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MAGNeT can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MAGNeT is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we used the state-of-the-art music source separation method,
namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs),
in order to keep only instrumental tracks. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency |
|---|---|---|---|
| facebook/magnet-small-10secs | 4.22 | 1.11 | 0.28 |
| facebook/magnet-medium-10secs | 4.61 | 1.14 | 0.28 |
| facebook/magnet-small-30secs | 4.35 | 1.17 | 0.28 |
| facebook/magnet-medium-30secs | 4.63 | 1.20 | 0.28 |
More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 16K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model.
**Mitigations:** Tracks that include vocals have been removed from the data source using corresponding tags, and using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MAGNeT is a model developed for artificial intelligence research on music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
## Audio-MAGNeT - Sound-effect generation models
### Training datasets
The audio-magnet models were trained on the following data sources: a subset of AudioSet (Gemmeke et al., 2017), [BBC sound effects](https://sound-effects.bbcrewind.co.uk/), AudioCaps (Kim et al., 2019), Clotho v2 (Drossos et al., 2020), VGG-Sound (Chen et al., 2020), FSD50K (Fonseca et al., 2021), [Free To Use Sounds](https://www.freetousesounds.com/all-in-one-bundle/), [Sonniss Game Effects](https://sonniss.com/gameaudiogdc), [WeSoundEffects](https://wesoundeffects.com/we-sound-effects-bundle-2020/), [Paramount Motion - Odeon Cinematic Sound Effects](https://www.paramountmotion.com/odeon-sound-effects).
### Evaluation datasets
The audio-magnet models (sound effect generation) were evaluated on the [AudioCaps benchmark](https://audiocaps.github.io/).
### Evaluation results
Below are the objective metrics obtained with the released audio-magnet models on AudioCaps (consisting of 10-second long samples).
| Model | Frechet Audio Distance | KLD |
|---|---|---|
| **facebook/audio-magnet-small** | **3.21** | **1.42** |
| facebook/audio-magnet-medium | 2.32 | 1.64 |
|
facebook/magnet-medium-10secs
|
facebook
| 2024-01-16T09:56:27Z | 732 | 7 |
audiocraft
|
[
"audiocraft",
"magnet",
"text-to-audio",
"arxiv:2401.04577",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-audio
| 2024-01-10T15:35:43Z |
---
inference: true
tags:
- magnet
- audiocraft
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
widget:
- text: "a funky house with 80s hip hop vibes"
example_title: "Prompt 1"
- text: "a chill song with influences from lofi, chillstep and downtempo"
example_title: "Prompt 2"
- text: "a catchy beat for a podcast intro"
example_title: "Prompt 3"
---
# MAGNeT - Medium - 1.5B - 10secs
MAGNeT is a text-to-music and text-to-sound model capable of generating high-quality audio samples conditioned on text descriptions.
It is a masked generative non-autoregressive Transformer trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike prior work, MAGNeT doesn't require neither semantic token conditioning nor model cascading, and it generates all 4 codebooks using a single non-autoregressive Transformer.
MAGNeT was published in [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577) by *Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi*.
Six checkpoints are released:
- [small-10secs](https://huggingface.co/facebook/magnet-small-10secs)
- [**medium-10secs** (this checkpoint)](https://huggingface.co/facebook/magnet-medium-10secs)
- [small-30secs](https://huggingface.co/facebook/magnet-small-30secs)
- [medium-30secs](https://huggingface.co/facebook/magnet-medium-30secs)
- [audio-small](https://huggingface.co/facebook/audio-magnet-small)
- [audio-medium](https://huggingface.co/facebook/audio-magnet-medium)
## 🤗 Transformers Usage
Coming soon...
## Audiocraft Usage
You can run MAGNeT locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MAGNeT
from audiocraft.data.audio import audio_write
model = MAGNeT.get_pretrained("facebook/magnet-medium-10secs")
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MAGNeT was trained between November 2023 and January 2024.
**Model version:** This is the version 1 of the model.
**Model type:** MAGNeT consists of an EnCodec model for audio tokenization, an non-autoregressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B; and two variants: a model trained for text-to-music generation task and a model trained for text-to-audio generation.
**Paper or resources for more information:** More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577).
**Citation details:**
```
@misc{ziv2024masked,
title={Masked Audio Generation using a Single Non-Autoregressive Transformer},
author={Alon Ziv and Itai Gat and Gael Le Lan and Tal Remez and Felix Kreuk and Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2024},
eprint={2401.04577},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MAGNeT can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MAGNeT is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we used the state-of-the-art music source separation method,
namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs),
in order to keep only instrumental tracks. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency |
|---|---|---|---|
| facebook/magnet-small-10secs | 4.22 | 1.11 | 0.28 |
| **facebook/magnet-medium-10secs** | **4.61** | **1.14** | **0.28** |
| facebook/magnet-small-30secs | 4.35 | 1.17 | 0.28 |
| facebook/magnet-medium-30secs | 4.63 | 1.20 | 0.28 |
More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 16K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model.
**Mitigations:** Tracks that include vocals have been removed from the data source using corresponding tags, and using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MAGNeT is a model developed for artificial intelligence research on music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
## Audio-MAGNeT - Sound-effect generation models
### Training datasets
The audio-magnet models were trained on the following data sources: a subset of AudioSet (Gemmeke et al., 2017), [BBC sound effects](https://sound-effects.bbcrewind.co.uk/), AudioCaps (Kim et al., 2019), Clotho v2 (Drossos et al., 2020), VGG-Sound (Chen et al., 2020), FSD50K (Fonseca et al., 2021), [Free To Use Sounds](https://www.freetousesounds.com/all-in-one-bundle/), [Sonniss Game Effects](https://sonniss.com/gameaudiogdc), [WeSoundEffects](https://wesoundeffects.com/we-sound-effects-bundle-2020/), [Paramount Motion - Odeon Cinematic Sound Effects](https://www.paramountmotion.com/odeon-sound-effects).
### Evaluation datasets
The audio-magnet models (sound effect generation) were evaluated on the [AudioCaps benchmark](https://audiocaps.github.io/).
### Evaluation results
Below are the objective metrics obtained with the released audio-magnet models on AudioCaps (consisting of 10-second long samples).
| Model | Frechet Audio Distance | KLD |
|---|---|---|
| facebook/audio-magnet-small | 3.21 | 1.42 |
| facebook/audio-magnet-medium | 2.32 | 1.64 |
|
Seokeon/V14_R384_lora_none_dog2
|
Seokeon
| 2024-01-16T09:51:15Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:48:29Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_dog2
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
Seokeon/V14_R256_lora_pp_dog2
|
Seokeon
| 2024-01-16T09:50:06Z | 3 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:46:25Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R256_lora_pp_dog2
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
Seokeon/V14_R384_lora_none_cat
|
Seokeon
| 2024-01-16T09:48:05Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:45:17Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_cat
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
devesh123098/Taxi_Car_Parking
|
devesh123098
| 2024-01-16T09:47:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-16T09:47:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_Car_Parking
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="devesh123098/Taxi_Car_Parking", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
devesh123098/q-FrozenLake-v1-4x4-noSlippery
|
devesh123098
| 2024-01-16T09:43:00Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-16T09:42:55Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="devesh123098/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Seokeon/V14_R384_lora_none_rc_car
|
Seokeon
| 2024-01-16T09:41:50Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:39:03Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_rc_car
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
Felladrin/onnx-Llama-68M-Chat-v1
|
Felladrin
| 2024-01-16T09:39:54Z | 4 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"llama",
"text-generation",
"conversational",
"en",
"base_model:Felladrin/Llama-68M-Chat-v1",
"base_model:quantized:Felladrin/Llama-68M-Chat-v1",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-16T09:39:26Z |
---
license: apache-2.0
language:
- en
library_name: "transformers.js"
base_model: Felladrin/Llama-68M-Chat-v1
---
INT8 ONNX version of [Felladrin/Llama-68M-Chat-v1](https://huggingface.co/Felladrin/Llama-68M-Chat-v1) to use with [Transformers.js](https://huggingface.co/docs/transformers.js).
|
LoneStriker/Yi-34Bx2-MoE-60B-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-16T09:36:48Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T09:18:17Z |
---
license: cc-by-nc-4.0
---
# Yi based MOE 2x34B with mixtral architecture
Highest score Model ranked by Open LLM Leaderboard (2024-01-11)
* [Average Score 76.72](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is an English & Chinese MoE Model , slightly different with [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B), and also based on
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
gpu code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
Shrideep/Retrieval_Augmented_Generation
|
Shrideep
| 2024-01-16T09:36:43Z | 0 | 1 | null |
[
"RAG",
"Retrieval Augmented Generation",
"llama-index",
"en",
"dataset:chromadb/paul_graham_essay",
"region:us"
] | null | 2024-01-16T07:35:16Z |
---
datasets:
- chromadb/paul_graham_essay
language:
- en
tags:
- RAG
- Retrieval Augmented Generation
- llama-index
---
# Summary:
Retrieval Augmented Generation (RAG) is a technique to specialize a language model with a specific knowledge domain by feeding in relevant data so that it can give better answers.
# How does RAG works?
1. Ready/ Preprocess your input data i.e. tokenization & vectorization
2. Feed the processed data to the Language Model.
3. Indexing the stored data that matches the context of the query.
# Implementing RAG with llama-index
### 1. Load relevant data and build an index
from llama_index import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
### 2. Query your data
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
# My application of RAG on ChatGPT
Check RAG.ipynb
|
Federic/lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes
|
Federic
| 2024-01-16T09:35:00Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-15T10:32:00Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
model-index:
- name: lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-exl2-5.0bpw
|
notstoic
| 2024-01-16T09:31:59Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T09:26:36Z |
---
base_model: []
tags:
- mergekit
- merge
---
# Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-exl2-5.0bpw
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
An experimental merge.
Prompt format: ChatML or Mixtral-8x7B-Instruct-v0.1
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./models/Mixtral-8x7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
* [Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ./models/Mixtral-8x7B-Instruct-v0.1
parameters:
density: 0.5
weight: 1.0
- model: ./models/Nous-Hermes-2-Mixtral-8x7B-DPO
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: ./models/Mixtral-8x7B-v0.1
parameters:
#normalize: false
#int8_mask: true
dtype: bfloat16
```
|
Seokeon/V14_lora_none_berry_bowl
|
Seokeon
| 2024-01-16T09:31:33Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:27:45Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks bowl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_berry_bowl
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks bowl using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ
|
TheBloke
| 2024-01-16T09:31:12Z | 127 | 22 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:quantized:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-16T08:42:54Z |
---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
inference: false
language:
- en
license: apache-2.0
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
model_creator: NousResearch
model_name: Nous Hermes 2 Mixtral 8X7B DPO
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nous Hermes 2 Mixtral 8X7B DPO - AWQ
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Hermes 2 Mixtral 8X7B DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
<!-- description start -->
## Description
This repo contains AWQ model files for [NousResearch's Nous Hermes 2 Mixtral 8X7B DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
**MIXTRAL AWQ**
This is a Mixtral AWQ model.
For AutoAWQ inference, please install AutoAWQ 0.1.8 or later.
Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git`
vLLM: version 0.2.6 is confirmed to support Mixtral AWQs.
TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!)
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.65 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: NousResearch's Nous Hermes 2 Mixtral 8X7B DPO
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2, we have also released an SFT only version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|± |0.0143|
| | |acc_norm|0.6425|± |0.0140|
|arc_easy | 0|acc |0.8657|± |0.0070|
| | |acc_norm|0.8636|± |0.0070|
|boolq | 1|acc |0.8783|± |0.0057|
|hellaswag | 0|acc |0.6661|± |0.0047|
| | |acc_norm|0.8489|± |0.0036|
|openbookqa | 0|acc |0.3440|± |0.0213|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8379|± |0.0086|
|winogrande | 0|acc |0.7616|± |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|± |0.0269|
| | |acc_norm|0.2520|± |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|± |0.0193|
| | |acc_norm|0.4055|± |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|± |0.0220|
| | |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.8010|± |0.0279|
| | |acc_norm|0.7913|± |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|± |0.0349|
| | |acc_norm|0.4612|± |0.0348|
|agieval_sat_math | 0|acc |0.4909|± |0.0338|
| | |acc_norm|0.4000|± |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|± |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|± |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|± |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|± |0.0263|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|± |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|± |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|± |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|± |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|± |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|± |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|± |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|± |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|± |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|± |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains in many benchmarks against Mixtral Instruct v0.1, on average, beating the flagship Mixtral model.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
567-labs/bge-base-en-v1.5-ft-quora-0.9
|
567-labs
| 2024-01-16T09:30:58Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-16T09:30:47Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7960 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
liamhvn/realistic-vision-v51
|
liamhvn
| 2024-01-16T09:30:22Z | 14 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-27T06:09:38Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Realistic Vision V5.1 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "realistic-vision-v51"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/realistic-vision-v51)
Model link: [View model](https://stablediffusionapi.com/models/realistic-vision-v51)
Credits: [View credits](https://civitai.com/?query=Realistic%20Vision%20V5.1)
View all models: [View Models](https://stablediffusionapi.com/models)
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "realistic-vision-v51",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN**
|
BOT365/tinyllama-colorist-lora
|
BOT365
| 2024-01-16T09:29:20Z | 7 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T10:01:44Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tinyllama-colorist-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-colorist-lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ssssseeee/my_awesome_billsum_model
|
ssssseeee
| 2024-01-16T09:18:10Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:lcw99/t5-base-korean-text-summary",
"base_model:finetune:lcw99/t5-base-korean-text-summary",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-16T08:34:55Z |
---
base_model: lcw99/t5-base-korean-text-summary
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [lcw99/t5-base-korean-text-summary](https://huggingface.co/lcw99/t5-base-korean-text-summary) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1454
- Rouge1: 0.1698
- Rouge2: 0.0688
- Rougel: 0.1623
- Rougelsum: 0.1632
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.1729 | 0.1723 | 0.072 | 0.1654 | 0.1656 | 19.0 |
| 1.4585 | 2.0 | 990 | 1.1454 | 0.1698 | 0.0688 | 0.1623 | 0.1632 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
pborchert/bert-ic
|
pborchert
| 2024-01-16T09:14:34Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"industry classification",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-12T10:04:55Z |
---
license: cc-by-4.0
language:
- en
pipeline_tag: fill-mask
tags:
- bert
- industry classification
library_name: transformers
widget:
- text: "Sanofi is in the [MASK] industry."
- text: "The current ratio measures [MASK]."
---
|
tmnam20/mdeberta-v3-base-wnli-1
|
tmnam20
| 2024-01-16T09:10:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T09:08:04Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-wnli-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/WNLI
type: tmnam20/VieGLUE
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.43661971830985913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-wnli-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6965
- Accuracy: 0.4366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Destiny0621/a2c-PandaReachDense-v3
|
Destiny0621
| 2024-01-16T09:10:07Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-16T09:00:59Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.14 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import os
import gymnasium as gym
import panda_gym
from huggingface_sb3 import load_from_hub, package_to_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
from stable_baselines3.common.env_util import make_vec_env
from huggingface_hub import notebook_login
```
**Environment**
```python
env_id = "PandaReachDense-v3"
# Create the env
env = gym.make(env_id)
```
**Model**
```python
model = A2C(policy = "MultiInputPolicy",
env = env,
learning_rate = 0.0001,
n_steps = 10,
verbose=1)
```
|
rccmsu/ruadapt_mistral_7b_v0.1
|
rccmsu
| 2024-01-16T09:10:07Z | 446 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"ru",
"arxiv:2312.02598",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T15:29:20Z |
---
license: apache-2.0
language:
- ru
pipeline_tag: text-generation
---
# ruadapt_mistral_7b_v0.1
This model is a fine-tuned (embeddings, lm head) version of mistralai/Mistral-7B-v0.1 on the Russian dataset (33GB). The training lasted 0.8 epochs, after which an error occurred. Was slightly additionally trained using LoRa after that.
In short: 1) Tokenization replacement, 2) Convert to fp16, 3) Training only embeddings and lm head on 0.8 epoch, 4) Convert new layers back to bf16 and merge with original transformer in bf16, 5) Tune embeddings (modules_to_save), lm head (modules_to_save), 4 first and last layers: linear layers (lora) and layer norms(modules_to_save) on 1% of the data.
ATTENTION!!!
The metrics on various datasets are slightly worse than those of the original model.
Instruct version:
https://huggingface.co/rccmsu/ruadapt_mistral_saiga_7b_v0.1
## Model description
Russian adaptation of Mistral-7B by replacing the tokenizer.
Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation //arXiv preprint arXiv:2312.02598. – 2023.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- total_eval_batch_size: 96
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: linear
- num_epochs: 2.0
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Seokeon/V14_lora_none_dog6
|
Seokeon
| 2024-01-16T09:09:59Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:06:08Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_dog6
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
tmnam20/mdeberta-v3-base-vtoc-10
|
tmnam20
| 2024-01-16T09:05:25Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T09:02:51Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vtoc-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- name: Accuracy
type: accuracy
value: 0.8088476242490442
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vtoc-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7381
- Accuracy: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7432 | 2.19 | 500 | 0.7743 | 0.7963 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
SamagraDataGov/test_mistral2
|
SamagraDataGov
| 2024-01-16T09:04:07Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T09:03:57Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
|
TheBloke
| 2024-01-16T09:01:46Z | 1,926 | 58 |
transformers
|
[
"transformers",
"gguf",
"mixtral",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"en",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:quantized:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"license:apache-2.0",
"region:us",
"conversational"
] | null | 2024-01-16T08:42:54Z |
---
base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
inference: false
language:
- en
license: apache-2.0
model-index:
- name: Nous-Hermes-2-Mixtral-8x7B-DPO
results: []
model_creator: NousResearch
model_name: Nous Hermes 2 Mixtral 8X7B DPO
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nous Hermes 2 Mixtral 8X7B DPO - GGUF
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Hermes 2 Mixtral 8X7B DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
<!-- description start -->
## Description
This repo contains GGUF format model files for [NousResearch's Nous Hermes 2 Mixtral 8X7B DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [nous-hermes-2-mixtral-8x7b-dpo.Q2_K.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q2_K.gguf) | Q2_K | 2 | 17.31 GB| 19.81 GB | significant quality loss - not recommended for most purposes |
| [nous-hermes-2-mixtral-8x7b-dpo.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q3_K_M.gguf) | Q3_K_M | 3 | 22.54 GB| 25.04 GB | very small, high quality loss |
| [nous-hermes-2-mixtral-8x7b-dpo.Q4_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf) | Q4_K_M | 4 | 28.45 GB| 30.95 GB | medium, balanced quality - recommended |
| [nous-hermes-2-mixtral-8x7b-dpo.Q5_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nous-hermes-2-mixtral-8x7b-dpo.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q5_K_M.gguf) | Q5_K_M | 5 | 33.23 GB| 35.73 GB | large, very low quality loss - recommended |
| [nous-hermes-2-mixtral-8x7b-dpo.Q6_K.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss |
| [nous-hermes-2-mixtral-8x7b-dpo.Q8_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/nous-hermes-2-mixtral-8x7b-dpo.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF and below it, a specific filename to download, such as: nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: NousResearch's Nous Hermes 2 Mixtral 8X7B DPO
# Nous Hermes 2 - Mixtral 8x7B - DPO

## Model description
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
This is the SFT + DPO version of Mixtral Hermes 2, we have also released an SFT only version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
# Table of Contents
1. [Example Outputs](#example-outputs)
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Comparison to Mixtral-Instruct
3. [Prompt Format](#prompt-format)
4. [Inference Example Code](#inference-code)
5. [Quantized Models](#quantized-models)
## Example Outputs
### Writing Code for Data Visualization

### Writing Cyberpunk Psychedelic Poems

### Performing Backtranslation to Create Prompts from Input Text

## Benchmark Results
Nous-Hermes 2 on Mixtral 8x7B is a major improvement across the board on the benchmarks below compared to the base Mixtral model, and is the first model to beat the flagship Mixtral Finetune by MistralAI.
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5990|± |0.0143|
| | |acc_norm|0.6425|± |0.0140|
|arc_easy | 0|acc |0.8657|± |0.0070|
| | |acc_norm|0.8636|± |0.0070|
|boolq | 1|acc |0.8783|± |0.0057|
|hellaswag | 0|acc |0.6661|± |0.0047|
| | |acc_norm|0.8489|± |0.0036|
|openbookqa | 0|acc |0.3440|± |0.0213|
| | |acc_norm|0.4660|± |0.0223|
|piqa | 0|acc |0.8324|± |0.0087|
| | |acc_norm|0.8379|± |0.0086|
|winogrande | 0|acc |0.7616|± |0.0120|
```
Average: 75.70
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2402|± |0.0269|
| | |acc_norm|0.2520|± |0.0273|
|agieval_logiqa_en | 0|acc |0.4117|± |0.0193|
| | |acc_norm|0.4055|± |0.0193|
|agieval_lsat_ar | 0|acc |0.2348|± |0.0280|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.5549|± |0.0220|
| | |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc | 0|acc |0.6617|± |0.0289|
| | |acc_norm|0.6357|± |0.0294|
|agieval_sat_en | 0|acc |0.8010|± |0.0279|
| | |acc_norm|0.7913|± |0.0284|
|agieval_sat_en_without_passage| 0|acc |0.4806|± |0.0349|
| | |acc_norm|0.4612|± |0.0348|
|agieval_sat_math | 0|acc |0.4909|± |0.0338|
| | |acc_norm|0.4000|± |0.0331|
```
Average: 46.05
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6105|± |0.0355|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7182|± |0.0235|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5736|± |0.0308|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4596|± |0.0263|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3500|± |0.0214|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2500|± |0.0164|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5200|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3540|± |0.0214|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6900|± |0.0103|
|bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2535|± |0.0138|
|bigbench_snarks | 0|multiple_choice_grade|0.7293|± |0.0331|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6744|± |0.0149|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.7400|± |0.0139|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2176|± |0.0117|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1543|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5200|± |0.0289|
```
Average: 49.70
# Benchmark Comparison Charts
## GPT4All

## AGI-Eval

## BigBench Reasoning Test

## Comparison to Mixtral Instruct:
Our benchmarks show gains in many benchmarks against Mixtral Instruct v0.1, on average, beating the flagship Mixtral model.

# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True)
model = MixtralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
# Quantized Models:
## All sizes of GGUF Quantizations are available here:
### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<!-- original-model-card end -->
|
tmnam20/mdeberta-v3-base-vsmec-100
|
tmnam20
| 2024-01-16T09:00:17Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:57:53Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vsmec-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSMEC
type: tmnam20/VieGLUE
config: vsmec
split: validation
args: vsmec
metrics:
- name: Accuracy
type: accuracy
value: 0.5539358600583091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsmec-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSMEC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2296
- Accuracy: 0.5539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0733 | 2.87 | 500 | 1.2329 | 0.5510 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Yi-34Bx2-MoE-60B-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-16T08:59:57Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T08:44:22Z |
---
license: cc-by-nc-4.0
---
# Yi based MOE 2x34B with mixtral architecture
Highest score Model ranked by Open LLM Leaderboard (2024-01-11)
* [Average Score 76.72](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is an English & Chinese MoE Model , slightly different with [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B), and also based on
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
gpu code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
tmnam20/mdeberta-v3-base-vsmec-10
|
tmnam20
| 2024-01-16T08:57:53Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:55:22Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vsmec-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSMEC
type: tmnam20/VieGLUE
config: vsmec
split: validation
args: vsmec
metrics:
- name: Accuracy
type: accuracy
value: 0.5364431486880467
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsmec-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSMEC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3020
- Accuracy: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1704 | 2.87 | 500 | 1.3027 | 0.5335 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
567-labs/bge-base-en-v1.5-ft-quora-0.5
|
567-labs
| 2024-01-16T08:53:26Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-16T08:53:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4422 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
tmnam20/mdeberta-v3-base-vsfc-100
|
tmnam20
| 2024-01-16T08:52:46Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:49:53Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vsfc-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSFC
type: tmnam20/VieGLUE
config: vsfc
split: validation
args: vsfc
metrics:
- name: Accuracy
type: accuracy
value: 0.9456727732154138
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsfc-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSFC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2290
- Accuracy: 0.9457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1763 | 1.4 | 500 | 0.2099 | 0.9431 |
| 0.1363 | 2.79 | 1000 | 0.2278 | 0.9463 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
adamo1139/llama-33B-AEZAKMI-v2-4.65bpw-exl2
|
adamo1139
| 2024-01-16T08:51:38Z | 4 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T09:30:57Z |
---
license: other
license_name: llama-1-research-license
license_link: LICENSE
---
EXL2 4.65bpw quant of LLaMa 33B fine-tuned on AEZAKMI v2 dataset.
|
golesheed/whisper-small-hi
|
golesheed
| 2024-01-16T08:47:08Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-15T11:02:00Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4300
- Wer: 34.1192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0824 | 2.44 | 1000 | 0.2958 | 35.3424 |
| 0.0218 | 4.89 | 2000 | 0.3518 | 34.1954 |
| 0.001 | 7.33 | 3000 | 0.4082 | 34.1446 |
| 0.0005 | 9.78 | 4000 | 0.4300 | 34.1192 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-vnrte-10
|
tmnam20
| 2024-01-16T08:41:58Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:40:02Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vnrte-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VNRTE
type: tmnam20/VieGLUE
config: vnrte
split: validation
args: vnrte
metrics:
- name: Accuracy
type: accuracy
value: 0.9980873445967485
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vnrte-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VNRTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0100
- Accuracy: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0123 | 1.28 | 500 | 0.0038 | 0.9990 |
| 0.0002 | 2.55 | 1000 | 0.0058 | 0.9987 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jeiku/Luna_3B
|
jeiku
| 2024-01-16T08:40:59Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/Bluemoon_cleaned_StableLM",
"base_model:merge:jeiku/Bluemoon_cleaned_StableLM",
"base_model:jeiku/ToxicNoRobotsRosaHermesBoros_3B",
"base_model:merge:jeiku/ToxicNoRobotsRosaHermesBoros_3B",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-16T08:25:18Z |
---
base_model:
- jeiku/ToxicNoRobotsRosaHermesBoros_3B
- jeiku/Theory_of_Mind_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B
- jeiku/ToxicNoRobotsRosaHermesBoros_3B
- jeiku/Everything_v3_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B
- jeiku/Bluemoon_cleaned_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B
- jeiku/Capybara_StableLM
- jeiku/ToxicNoRobotsRosaHermesBoros_3B
- jeiku/alpaca-cleaned_StableLM
tags:
- mergekit
- merge
---
# lower
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/ToxicNoRobotsRosaHermesBoros_3B](https://huggingface.co/jeiku/ToxicNoRobotsRosaHermesBoros_3B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/ToxicNoRobotsRosaHermesBoros_3B](https://huggingface.co/jeiku/ToxicNoRobotsRosaHermesBoros_3B) + [jeiku/Theory_of_Mind_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_StableLM)
* [jeiku/ToxicNoRobotsRosaHermesBoros_3B](https://huggingface.co/jeiku/ToxicNoRobotsRosaHermesBoros_3B) + [jeiku/Everything_v3_StableLM](https://huggingface.co/jeiku/Everything_v3_StableLM)
* [jeiku/ToxicNoRobotsRosaHermesBoros_3B](https://huggingface.co/jeiku/ToxicNoRobotsRosaHermesBoros_3B) + [jeiku/Bluemoon_cleaned_StableLM](https://huggingface.co/jeiku/Bluemoon_cleaned_StableLM)
* [jeiku/ToxicNoRobotsRosaHermesBoros_3B](https://huggingface.co/jeiku/ToxicNoRobotsRosaHermesBoros_3B) + [jeiku/Capybara_StableLM](https://huggingface.co/jeiku/Capybara_StableLM)
* [jeiku/ToxicNoRobotsRosaHermesBoros_3B](https://huggingface.co/jeiku/ToxicNoRobotsRosaHermesBoros_3B) + [jeiku/alpaca-cleaned_StableLM](https://huggingface.co/jeiku/alpaca-cleaned_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/alpaca-cleaned_StableLM
parameters:
weight: 0.1
density: 1
- model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Capybara_StableLM
parameters:
weight: 0.1
density: 1
- model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Everything_v3_StableLM
parameters:
weight: 0.1
density: 1
- model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Theory_of_Mind_StableLM
parameters:
weight: 0.15
density: 1
- model: jeiku/ToxicNoRobotsRosaHermesBoros_3B+jeiku/Bluemoon_cleaned_StableLM
parameters:
weight: 0.1
density: 1
merge_method: dare_ties
base_model: jeiku/ToxicNoRobotsRosaHermesBoros_3B
parameters:
dtype: bfloat16
```
|
Seokeon/full_pp_robot_toy
|
Seokeon
| 2024-01-16T08:38:54Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T07:59:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/full_pp_robot_toy
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
wcyat/whisper-small-yue-hk-retrained
|
wcyat
| 2024-01-16T08:38:35Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:wcyat/whisper-small-yue-hk-retrained-1",
"base_model:finetune:wcyat/whisper-small-yue-hk-retrained-1",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-10T12:50:13Z |
---
base_model: wcyat/whisper-small-yue-hk-retrained-1
tags:
- generated_from_trainer
model-index:
- name: whisper-small-yue-hk-retrained-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-yue-hk-retrained-2
This model is a fine-tuned version of [wcyat/whisper-small-yue-hk-retrained-1](https://huggingface.co/wcyat/whisper-small-yue-hk-retrained-1) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2631
- eval_cer: 12.5099
- eval_runtime: 4014.1159
- eval_samples_per_second: 2.037
- eval_steps_per_second: 0.127
- epoch: 0.81
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-sst2-100
|
tmnam20
| 2024-01-16T08:38:09Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:36:18Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-sst2-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/SST2
type: tmnam20/VieGLUE
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8944954128440367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-sst2-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3880
- Accuracy: 0.8945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3414 | 0.24 | 500 | 0.3477 | 0.8681 |
| 0.2858 | 0.48 | 1000 | 0.3121 | 0.8911 |
| 0.2358 | 0.71 | 1500 | 0.3466 | 0.8807 |
| 0.2413 | 0.95 | 2000 | 0.3225 | 0.8819 |
| 0.1722 | 1.19 | 2500 | 0.3268 | 0.8933 |
| 0.1926 | 1.43 | 3000 | 0.3712 | 0.8899 |
| 0.1766 | 1.66 | 3500 | 0.3130 | 0.9014 |
| 0.1706 | 1.9 | 4000 | 0.3517 | 0.8899 |
| 0.1308 | 2.14 | 4500 | 0.3970 | 0.9014 |
| 0.1315 | 2.38 | 5000 | 0.3525 | 0.8991 |
| 0.1504 | 2.61 | 5500 | 0.3728 | 0.8968 |
| 0.1178 | 2.85 | 6000 | 0.3987 | 0.8922 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
SJ-Donald/kor-hate-sentence-large
|
SJ-Donald
| 2024-01-16T08:37:03Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"kcbert",
"kor-hate-sentence",
"sentimental-analysis",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:33:03Z |
---
license: apache-2.0
tags:
- bert
- kcbert
- kor-hate-sentence
- sentimental-analysis
---
# SJ-Donald/kor-hate-sentence-large
SJ-Donald/kor-hate-sentence-large is pretrained model using follow:
## Models
* [beomi/kcbert-large](https://huggingface.co/beomi/kcbert-large)
## Datasets
* [SJ-Donald/kor-hate-sentence](https://huggingface.co/datasets/SJ-Donald/kor-hate-sentence)
## How to use
```Python
from transformers import TextClassificationPipeline, BertForSequenceClassification, AutoTokenizer+
model_name = 'SJ-Donald/kor-hate-sentence-large'
model = BertForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = TextClassificationPipeline(
model = model,
tokenizer = tokenizer,
device = 0, # cpu: -1, gpu: gpu number
return_all_scores = True,
function_to_apply = 'sigmoid'
)
for result in pipe("이딴 게임할 거면 방송 그만해라 어휴")[0]:
print(result)
{'label': 'hate', 'score': 0.016597675159573555}
{'label': 'clean', 'score': 0.9842987060546875}
```
|
tmnam20/mdeberta-v3-base-sst2-10
|
tmnam20
| 2024-01-16T08:36:17Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:34:25Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-sst2-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/SST2
type: tmnam20/VieGLUE
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8979357798165137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-sst2-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3852
- Accuracy: 0.8979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3449 | 0.24 | 500 | 0.3368 | 0.8567 |
| 0.2987 | 0.48 | 1000 | 0.3037 | 0.8716 |
| 0.2492 | 0.71 | 1500 | 0.3347 | 0.8842 |
| 0.24 | 0.95 | 2000 | 0.2953 | 0.8830 |
| 0.195 | 1.19 | 2500 | 0.3445 | 0.8842 |
| 0.1934 | 1.43 | 3000 | 0.3217 | 0.8876 |
| 0.1697 | 1.66 | 3500 | 0.3627 | 0.8876 |
| 0.1757 | 1.9 | 4000 | 0.3366 | 0.8899 |
| 0.1328 | 2.14 | 4500 | 0.4266 | 0.8876 |
| 0.1475 | 2.38 | 5000 | 0.3737 | 0.8933 |
| 0.1574 | 2.61 | 5500 | 0.3888 | 0.8911 |
| 0.1548 | 2.85 | 6000 | 0.4063 | 0.8865 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-sst2-1
|
tmnam20
| 2024-01-16T08:34:25Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:32:42Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-sst2-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/SST2
type: tmnam20/VieGLUE
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8922018348623854
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-sst2-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3789
- Accuracy: 0.8922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3138 | 0.24 | 500 | 0.3016 | 0.8761 |
| 0.2693 | 0.48 | 1000 | 0.3624 | 0.8911 |
| 0.2359 | 0.71 | 1500 | 0.3470 | 0.8739 |
| 0.2584 | 0.95 | 2000 | 0.2878 | 0.8911 |
| 0.1774 | 1.19 | 2500 | 0.3204 | 0.9048 |
| 0.1921 | 1.43 | 3000 | 0.3878 | 0.8899 |
| 0.1822 | 1.66 | 3500 | 0.3444 | 0.9002 |
| 0.1772 | 1.9 | 4000 | 0.3351 | 0.8968 |
| 0.1368 | 2.14 | 4500 | 0.3350 | 0.9060 |
| 0.1259 | 2.38 | 5000 | 0.3967 | 0.8968 |
| 0.107 | 2.61 | 5500 | 0.3937 | 0.8945 |
| 0.1371 | 2.85 | 6000 | 0.3743 | 0.8968 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Azam/corgy_dog_LoRA
|
Azam
| 2024-01-16T08:34:08Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-16T07:17:16Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - Azam/corgy_dog_LoRA
<Gallery />
## Model description
These are Azam/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Azam/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
|
jvh/Mistral-asst_top1_2023-GEITje
|
jvh
| 2024-01-16T08:31:48Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1",
"base_model:merge:NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1",
"base_model:Rijgersberg/GEITje-7B-chat-v2",
"base_model:merge:Rijgersberg/GEITje-7B-chat-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T17:27:37Z |
---
base_model:
- Rijgersberg/GEITje-7B-chat-v2
- NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Rijgersberg/GEITje-7B-chat-v2](https://huggingface.co/Rijgersberg/GEITje-7B-chat-v2)
* [NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1](https://huggingface.co/NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Rijgersberg/GEITje-7B-chat-v2
layer_range: [0, 32]
- model: NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1
layer_range: [0, 32]
merge_method: slerp
base_model: Rijgersberg/GEITje-7B-chat-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
tmnam20/mdeberta-v3-base-qqp-10
|
tmnam20
| 2024-01-16T08:25:03Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:23:13Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: mdeberta-v3-base-qqp-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QQP
type: tmnam20/VieGLUE
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8998268612416522
- name: F1
type: f1
value: 0.8668551515550004
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-qqp-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- Accuracy: 0.8998
- F1: 0.8669
- Combined Score: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.2833 | 0.44 | 5000 | 0.3087 | 0.8708 | 0.8217 | 0.8462 |
| 0.2702 | 0.88 | 10000 | 0.2763 | 0.8818 | 0.8421 | 0.8619 |
| 0.2269 | 1.32 | 15000 | 0.2819 | 0.8883 | 0.8469 | 0.8676 |
| 0.2182 | 1.76 | 20000 | 0.2728 | 0.8929 | 0.8599 | 0.8764 |
| 0.1682 | 2.2 | 25000 | 0.2922 | 0.8971 | 0.8613 | 0.8792 |
| 0.175 | 2.64 | 30000 | 0.2755 | 0.8981 | 0.8635 | 0.8808 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Yi-34Bx2-MoE-60B-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-16T08:22:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T08:10:22Z |
---
license: cc-by-nc-4.0
---
# Yi based MOE 2x34B with mixtral architecture
Highest score Model ranked by Open LLM Leaderboard (2024-01-11)
* [Average Score 76.72](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is an English & Chinese MoE Model , slightly different with [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B), and also based on
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
gpu code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
tmnam20/mdeberta-v3-base-qnli-100
|
tmnam20
| 2024-01-16T08:21:23Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:19:38Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-qnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QNLI
type: tmnam20/VieGLUE
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8974922203917262
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-qnli-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2906
- Accuracy: 0.8975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3773 | 0.15 | 500 | 0.3870 | 0.8431 |
| 0.3547 | 0.31 | 1000 | 0.3175 | 0.8658 |
| 0.3385 | 0.46 | 1500 | 0.2986 | 0.8739 |
| 0.342 | 0.61 | 2000 | 0.2787 | 0.8845 |
| 0.3003 | 0.76 | 2500 | 0.3075 | 0.8726 |
| 0.3298 | 0.92 | 3000 | 0.2781 | 0.8807 |
| 0.2475 | 1.07 | 3500 | 0.2695 | 0.8942 |
| 0.2441 | 1.22 | 4000 | 0.2615 | 0.8940 |
| 0.249 | 1.37 | 4500 | 0.2548 | 0.8958 |
| 0.2261 | 1.53 | 5000 | 0.2588 | 0.8946 |
| 0.2348 | 1.68 | 5500 | 0.2587 | 0.8982 |
| 0.2626 | 1.83 | 6000 | 0.2581 | 0.8982 |
| 0.2463 | 1.99 | 6500 | 0.2520 | 0.8964 |
| 0.1768 | 2.14 | 7000 | 0.2795 | 0.8951 |
| 0.1768 | 2.29 | 7500 | 0.3069 | 0.8942 |
| 0.1752 | 2.44 | 8000 | 0.2783 | 0.8971 |
| 0.1687 | 2.6 | 8500 | 0.2900 | 0.8995 |
| 0.163 | 2.75 | 9000 | 0.2828 | 0.8969 |
| 0.1547 | 2.9 | 9500 | 0.2873 | 0.8980 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
greymatter-2024/tinyllama2_finetuned_chatbot_hey
|
greymatter-2024
| 2024-01-16T08:21:17Z | 15 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-01-16T05:53:18Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tinyllama2_finetuned_chatbot_hey
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama2_finetuned_chatbot_hey
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gizmo-ai/Cohere-embed-multilingual-v3.0
|
gizmo-ai
| 2024-01-16T08:15:41Z | 8 | 0 |
transformers
|
[
"transformers",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T08:15:41Z |
---
tags:
- mteb
model-index:
- name: embed-multilingual-v3.0
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.85074626865672
- type: ap
value: 41.53151744002314
- type: f1
value: 71.94656880817726
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 95.600375
- type: ap
value: 93.57882128753579
- type: f1
value: 95.59945484944305
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.794
- type: f1
value: 48.740439663130985
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 55.105000000000004
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.15653426568874
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.78876256237919
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.12873500780318
- type: mrr
value: 75.87037769863255
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.01183720167818
- type: cos_sim_spearman
value: 85.00916590717613
- type: euclidean_pearson
value: 84.072733561361
- type: euclidean_spearman
value: 85.00916590717613
- type: manhattan_pearson
value: 83.89233507343208
- type: manhattan_spearman
value: 84.87482549674115
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.09415584415584
- type: f1
value: 86.05173549773973
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.49773000165541
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.909633073998876
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 49.481
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 47.449999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 59.227
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 37.729
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 29.673
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 44.278
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 43.218
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 40.63741666666667
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 33.341
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 29.093999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 40.801
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 40.114
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 33.243
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 29.958000000000002
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 41.004000000000005
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.150000000000006
- type: f1
value: 43.69803436468346
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 88.532
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 44.105
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 70.612
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 93.9672
- type: ap
value: 90.72947025321227
- type: f1
value: 93.96271599852622
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 43.447
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.92476060191517
- type: f1
value: 94.69383758972194
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 78.8873689010488
- type: f1
value: 62.537485052253885
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.51244115669132
- type: f1
value: 72.40074466830153
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.00470746469401
- type: f1
value: 79.03758200183096
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.183215937303736
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 33.443759055792135
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.58713095176127
- type: mrr
value: 33.7326038566206
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 36.417
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 63.415
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 88.924
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 58.10997801688676
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 65.02444843766075
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 19.339000000000002
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 86.61540076033945
- type: cos_sim_spearman
value: 82.1820253476181
- type: euclidean_pearson
value: 83.73901215845989
- type: euclidean_spearman
value: 82.182021064594
- type: manhattan_pearson
value: 83.76685139192031
- type: manhattan_spearman
value: 82.14074705306663
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.62241109228789
- type: cos_sim_spearman
value: 77.62042143066208
- type: euclidean_pearson
value: 82.77237785274072
- type: euclidean_spearman
value: 77.62042142290566
- type: manhattan_pearson
value: 82.70945589621266
- type: manhattan_spearman
value: 77.57245632826351
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.8307075352031
- type: cos_sim_spearman
value: 85.15620774806095
- type: euclidean_pearson
value: 84.21956724564915
- type: euclidean_spearman
value: 85.15620774806095
- type: manhattan_pearson
value: 84.0677597021641
- type: manhattan_spearman
value: 85.02572172855729
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.33749463516592
- type: cos_sim_spearman
value: 80.01967438481185
- type: euclidean_pearson
value: 82.16884494022196
- type: euclidean_spearman
value: 80.01967218194336
- type: manhattan_pearson
value: 81.94431512413773
- type: manhattan_spearman
value: 79.81636247503731
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.2070761097028
- type: cos_sim_spearman
value: 88.92297656560552
- type: euclidean_pearson
value: 87.95961374550303
- type: euclidean_spearman
value: 88.92298798854765
- type: manhattan_pearson
value: 87.85515971478168
- type: manhattan_spearman
value: 88.8100644762342
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.48103354546488
- type: cos_sim_spearman
value: 86.91850928862898
- type: euclidean_pearson
value: 86.06766986527145
- type: euclidean_spearman
value: 86.91850928862898
- type: manhattan_pearson
value: 86.02705585360717
- type: manhattan_spearman
value: 86.86666545434721
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.30267248880148
- type: cos_sim_spearman
value: 90.08752166657892
- type: euclidean_pearson
value: 90.4697525265135
- type: euclidean_spearman
value: 90.08752166657892
- type: manhattan_pearson
value: 90.57174978064741
- type: manhattan_spearman
value: 90.212834942229
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.10616236380835
- type: cos_sim_spearman
value: 66.81483164137016
- type: euclidean_pearson
value: 68.48505128040803
- type: euclidean_spearman
value: 66.81483164137016
- type: manhattan_pearson
value: 68.46133268524885
- type: manhattan_spearman
value: 66.83684227990202
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.12768629069949
- type: cos_sim_spearman
value: 88.78683817318573
- type: euclidean_pearson
value: 88.47603251297261
- type: euclidean_spearman
value: 88.78683817318573
- type: manhattan_pearson
value: 88.46483630890225
- type: manhattan_spearman
value: 88.76593424921617
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.30886658431281
- type: mrr
value: 95.5964251797585
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 70.04599999999999
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.87524752475248
- type: cos_sim_ap
value: 96.79160651306724
- type: cos_sim_f1
value: 93.57798165137615
- type: cos_sim_precision
value: 95.42619542619542
- type: cos_sim_recall
value: 91.8
- type: dot_accuracy
value: 99.87524752475248
- type: dot_ap
value: 96.79160651306724
- type: dot_f1
value: 93.57798165137615
- type: dot_precision
value: 95.42619542619542
- type: dot_recall
value: 91.8
- type: euclidean_accuracy
value: 99.87524752475248
- type: euclidean_ap
value: 96.79160651306724
- type: euclidean_f1
value: 93.57798165137615
- type: euclidean_precision
value: 95.42619542619542
- type: euclidean_recall
value: 91.8
- type: manhattan_accuracy
value: 99.87326732673267
- type: manhattan_ap
value: 96.7574606340297
- type: manhattan_f1
value: 93.45603271983639
- type: manhattan_precision
value: 95.60669456066945
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.87524752475248
- type: max_ap
value: 96.79160651306724
- type: max_f1
value: 93.57798165137615
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.12288811917144
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.22267280169542
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.39780995606098
- type: mrr
value: 53.26826563958916
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.15118979569649
- type: cos_sim_spearman
value: 30.99428921914572
- type: dot_pearson
value: 31.151189338601924
- type: dot_spearman
value: 30.99428921914572
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 83.372
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 32.698
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.1998
- type: ap
value: 14.646205259325157
- type: f1
value: 54.96172518137252
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.176004527447645
- type: f1
value: 62.48549068096645
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 50.13767789739772
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.38016331882935
- type: cos_sim_ap
value: 75.1635976260804
- type: cos_sim_f1
value: 69.29936305732484
- type: cos_sim_precision
value: 66.99507389162561
- type: cos_sim_recall
value: 71.76781002638522
- type: dot_accuracy
value: 86.38016331882935
- type: dot_ap
value: 75.16359359202374
- type: dot_f1
value: 69.29936305732484
- type: dot_precision
value: 66.99507389162561
- type: dot_recall
value: 71.76781002638522
- type: euclidean_accuracy
value: 86.38016331882935
- type: euclidean_ap
value: 75.16360246558416
- type: euclidean_f1
value: 69.29936305732484
- type: euclidean_precision
value: 66.99507389162561
- type: euclidean_recall
value: 71.76781002638522
- type: manhattan_accuracy
value: 86.27883411813792
- type: manhattan_ap
value: 75.02872038741897
- type: manhattan_f1
value: 69.29256284011403
- type: manhattan_precision
value: 68.07535641547861
- type: manhattan_recall
value: 70.55408970976254
- type: max_accuracy
value: 86.38016331882935
- type: max_ap
value: 75.16360246558416
- type: max_f1
value: 69.29936305732484
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.39729110878255
- type: cos_sim_ap
value: 86.48560260020555
- type: cos_sim_f1
value: 79.35060602690982
- type: cos_sim_precision
value: 76.50632549496105
- type: cos_sim_recall
value: 82.41453649522637
- type: dot_accuracy
value: 89.39729110878255
- type: dot_ap
value: 86.48559829915334
- type: dot_f1
value: 79.35060602690982
- type: dot_precision
value: 76.50632549496105
- type: dot_recall
value: 82.41453649522637
- type: euclidean_accuracy
value: 89.39729110878255
- type: euclidean_ap
value: 86.48559993122497
- type: euclidean_f1
value: 79.35060602690982
- type: euclidean_precision
value: 76.50632549496105
- type: euclidean_recall
value: 82.41453649522637
- type: manhattan_accuracy
value: 89.36042224550782
- type: manhattan_ap
value: 86.47238558562499
- type: manhattan_f1
value: 79.24500641378047
- type: manhattan_precision
value: 75.61726236273344
- type: manhattan_recall
value: 83.23837388358484
- type: max_accuracy
value: 89.39729110878255
- type: max_ap
value: 86.48560260020555
- type: max_f1
value: 79.35060602690982
---
# Cohere embed-multilingual-v3.0
This repository contains the tokenizer for the Cohere `embed-multilingual-v3.0` model. See our blogpost [Cohere Embed V3](https://txt.cohere.com/introducing-embed-v3/) for more details on this model.
You can use the embedding model either via the Cohere API, AWS SageMaker or in your private deployments.
## Usage Cohere API
The following code snippet shows the usage of the Cohere API. Install the cohere SDK via:
```
pip install -U cohere
```
Get your free API key on: www.cohere.com
```python
# This snippet shows and example how to use the Cohere Embed V3 models for semantic search.
# Make sure to have the Cohere SDK in at least v4.30 install: pip install -U cohere
# Get your API key from: www.cohere.com
import cohere
import numpy as np
cohere_key = "{YOUR_COHERE_API_KEY}" #Get your API key from www.cohere.com
co = cohere.Client(cohere_key)
docs = ["The capital of France is Paris",
"PyTorch is a machine learning framework based on the Torch library.",
"The average cat lifespan is between 13-17 years"]
#Encode your documents with input type 'search_document'
doc_emb = co.embed(docs, input_type="search_document", model="embed-multilingual-v3.0").embeddings
doc_emb = np.asarray(doc_emb)
#Encode your query with input type 'search_query'
query = "What is Pytorch"
query_emb = co.embed([query], input_type="search_query", model="embed-multilingual-v3.0").embeddings
query_emb = np.asarray(query_emb)
query_emb.shape
#Compute the dot product between query embedding and document embedding
scores = np.dot(query_emb, doc_emb.T)[0]
#Find the highest scores
max_idx = np.argsort(-scores)
print(f"Query: {query}")
for idx in max_idx:
print(f"Score: {scores[idx]:.2f}")
print(docs[idx])
print("--------")
```
## Usage AWS SageMaker
The embedding model can be privately deployed in your AWS Cloud using our [AWS SageMaker marketplace offering](https://aws.amazon.com/marketplace/pp/prodview-z6huxszcqc25i). It runs privately in your VPC, with latencies as low as 5ms for query encoding.
## Usage AWS Bedrock
Soon the model will also be available via AWS Bedrock. Stay tuned
## Private Deployment
You want to run the model on your own hardware? [Contact Sales](https://cohere.com/contact-sales) to learn more.
## Supported Languages
This model was trained on nearly 1B English training pairs and nearly 0.5B Non-English training pairs from 100+ languages.
Evaluation results can be found in the [Embed V3.0 Benchmark Results spreadsheet](https://docs.google.com/spreadsheets/d/1w7gnHWMDBdEUrmHgSfDnGHJgVQE5aOiXCCwO3uNH_mI/edit?usp=sharing).
|
darshan8950/falcon-7b-sharded-bf16-finetuned
|
darshan8950
| 2024-01-16T08:13:07Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-01-15T20:22:51Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: falcon-7b-sharded-bf16-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sharded-bf16-finetuned
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-mrpc-1
|
tmnam20
| 2024-01-16T08:12:08Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:10:04Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: mdeberta-v3-base-mrpc-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MRPC
type: tmnam20/VieGLUE
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8792452830188678
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-mrpc-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3835
- Accuracy: 0.8431
- F1: 0.8792
- Combined Score: 0.8612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Bhriganka/blue-sport-car-npx
|
Bhriganka
| 2024-01-16T08:12:03Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T08:07:42Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Blue-Sport-Car Dreambooth model trained by Bhriganka following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 213450007001
Sample pictures of this concept:


|
NBA55/llama2-qlora-finetunined-4-bit-prev-and-4.14k-learning-rate-3e4
|
NBA55
| 2024-01-16T08:11:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-16T08:11:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
alibidaran/sql_generator
|
alibidaran
| 2024-01-16T08:10:59Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:b-mc2/sql-create-context",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-29T13:11:33Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: sql_generator
results: []
datasets:
- b-mc2/sql-create-context
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sql_generator
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0761 | 1.81 | 1000 | 1.4913 |
| 1.4004 | 3.62 | 2000 | 1.3671 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
alibidaran/llama-2-7b-sql_generator_2
|
alibidaran
| 2024-01-16T08:10:18Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"dataset:b-mc2/sql-create-context",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T13:34:02Z |
---
license: apache-2.0
datasets:
- b-mc2/sql-create-context
language:
- en
tags:
- code
---
|
tmnam20/mdeberta-v3-base-mnli-100
|
tmnam20
| 2024-01-16T08:10:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:08:10Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-mnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MNLI
type: tmnam20/VieGLUE
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8412327095199349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-mnli-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4764
- Accuracy: 0.8412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5194 | 0.41 | 5000 | 0.4901 | 0.8127 |
| 0.4861 | 0.81 | 10000 | 0.4713 | 0.8114 |
| 0.3993 | 1.22 | 15000 | 0.4508 | 0.8285 |
| 0.3867 | 1.63 | 20000 | 0.4546 | 0.8302 |
| 0.3496 | 2.04 | 25000 | 0.4765 | 0.8295 |
| 0.3376 | 2.44 | 30000 | 0.4828 | 0.8315 |
| 0.3104 | 2.85 | 35000 | 0.4852 | 0.8314 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
alibidaran/llama-2-7b-virtual_doctor
|
alibidaran
| 2024-01-16T08:04:10Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"en",
"dataset:jayantdocplix/medical_dataset",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-06T13:45:03Z |
---
license: apache-2.0
language:
- en
tags:
- medical
datasets:
- jayantdocplix/medical_dataset
---
# This model is llama2-based and acts as a doctor who can detect diseases and recommend various prescriptions.
|
alibidaran/Farsi-llama2
|
alibidaran
| 2024-01-16T08:03:03Z | 9 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"fa",
"dataset:sinarashidi/alpaca-persian",
"doi:10.57967/hf/2254",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T10:33:28Z |
---
license: apache-2.0
datasets:
- sinarashidi/alpaca-persian
language:
- fa
---
# This model is fined-tuned version of llama2 for Persian Alpaca style prompts
|
rachittshah/mistral-function-calling-7b
|
rachittshah
| 2024-01-16T08:03:03Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-v0.1",
"gorilla-llm/gorilla-openfunctions-v1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T07:59:36Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- gorilla-llm/gorilla-openfunctions-v1
---
# mistral-function-calling-7b
mistral-function-calling-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [gorilla-llm/gorilla-openfunctions-v1](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v1)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
parameters:
density: .5
weight: .7
- model: gorilla-llm/gorilla-openfunctions-v1
parameters:
density: .5
weight: 1
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "rachittshah/mistral-function-calling-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Seokeon/full_pp_rc_car
|
Seokeon
| 2024-01-16T07:58:44Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T06:50:50Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/full_pp_rc_car
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
fuyu-quant/ibl-regression-ver2-all
|
fuyu-quant
| 2024-01-16T07:54:40Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-16T07:54:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
rccmsu/ruadapt_mistral_saiga_7b_v0.1
|
rccmsu
| 2024-01-16T07:52:01Z | 657 | 4 |
peft
|
[
"peft",
"text-generation",
"ru",
"arxiv:2312.02598",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-15T15:54:12Z |
---
library_name: peft
license: apache-2.0
language:
- ru
pipeline_tag: text-generation
---
Use in the same way as IlyaGusev/saiga2_7b_lora.
Up to 60% faster generation and 35% training (on identical russian text sequences!) with HF because of different tokenizer.
rccmsu/ruadapt_mistral_7b_v0.1 trained on saiga corpuses.
The quality is slightly worse than the IlyaGusev/saiga_mistral_7b_lora, but faster because of tokenizer.
WARNING! Load tokenizer as AutoTokenizer.from_pretrained(model_path, use_fast=True)
Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation //arXiv preprint arXiv:2312.02598. – 2023.
|
LoneStriker/Yi-34Bx2-MoE-60B-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-16T07:46:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T07:36:53Z |
---
license: cc-by-nc-4.0
---
# Yi based MOE 2x34B with mixtral architecture
Highest score Model ranked by Open LLM Leaderboard (2024-01-11)
* [Average Score 76.72](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is an English & Chinese MoE Model , slightly different with [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B), and also based on
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
gpu code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
hardikJ11/bart-base-finetuned-cnn-news
|
hardikJ11
| 2024-01-16T07:45:05Z | 12 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-01-16T06:17:42Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: bart-base-finetuned-cnn-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 21.8948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-cnn-news
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8560
- Rouge1: 21.8948
- Rouge2: 9.7157
- Rougel: 17.9348
- Rougelsum: 20.5347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.7005 | 1.0 | 718 | 2.9872 | 21.7279 | 9.0406 | 17.392 | 20.0627 |
| 2.937 | 2.0 | 1436 | 2.8590 | 21.3056 | 8.5254 | 17.2338 | 20.0403 |
| 2.2642 | 3.0 | 2154 | 2.6744 | 21.277 | 9.6162 | 17.7775 | 20.1688 |
| 1.5774 | 4.0 | 2872 | 2.7020 | 21.7458 | 9.846 | 18.1649 | 20.7067 |
| 1.0174 | 5.0 | 3590 | 2.8560 | 21.8948 | 9.7157 | 17.9348 | 20.5347 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
radames/sd-21-DPO-LoRA
|
radames
| 2024-01-16T07:44:11Z | 144 | 6 |
diffusers
|
[
"diffusers",
"text-to-image",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"region:us"
] |
text-to-image
| 2024-01-07T20:04:09Z |
---
library_name: diffusers
pipeline_tag: text-to-image
inference: true
base_model: stabilityai/stable-diffusion-2-1
---
# DPO LoRA Stable Diffusion v2-1
Model trained with LoRA implementation of Diffusion DPO Read more [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/diffusion_dpo)
Base Model: https://huggingface.co/stabilityai/stable-diffusion-2-1
## Running with [🧨 diffusers library](https://github.com/huggingface/diffusers)
```python
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
import torch
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/sd-turbo", # SD Turbo is a destilled version of Stable Diffusion 2.1
# "stabilityai/stable-diffusion-2-1", # for the original stable diffusion 2.1 model
torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")
pipe.load_lora_weights("radames/sd-21-DPO-LoRA", adapter_name="dpo-lora-sd21")
pipe.set_adapters(["dpo-lora-sd21"], adapter_weights=[1.0]) # you can play with adapter_weights to increase the effect of the LoRA model
seed = 123123
prompt = "portrait headshot professional of elon musk"
negative_prompt = "3d render, cartoon, drawing, art, low light"
generator = torch.Generator().manual_seed(seed)
images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=512,
height=512,
num_inference_steps=2,
generator=generator,
guidance_scale=1.0,
num_images_per_prompt=4
).images
make_image_grid(images, 1, 4)
```
## Guidance Scale vs LoRA weights

## Examples
Left Withoud DPO right with DPO LoRA
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/R8E0hRpWIE6OhhtvgJeEU.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/Eg4LbyxCfhmsk2INzqODw.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/GD7KumSCNweBWMJ1TArI-.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/SO7QoA9lZJY9hI0U4fBLy.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/ZWbQwIQ5OklEgF9RW581R.png style="max-width: 60rem;">
|
s3nh/Mistral-7B-Evol-Instruct-Chinese-GGUF
|
s3nh
| 2024-01-16T07:43:25Z | 15 | 6 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T10:38:48Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/s3nh/Mistral-7B-Evol-Instruct-Chinese).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
mmap compatibility: models can be loaded using mmap for fast loading and saving.
Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### inference
User: Tell me story about what is an quantization and what do we need to build.
Me: Ok, you can see the video [https://youtu.be/q8GhYRlQ1dU](https://youtu.be/q8GhYRlQ1dU) I did yesterday, it may help you understand.
# Original model card
|
damojay/taml
|
damojay
| 2024-01-16T07:38:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-16T07:37:42Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
FelixChao/NinjaDolphin-7B
|
FelixChao
| 2024-01-16T07:25:18Z | 1,375 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"beowolx/CodeNinja-1.0-OpenChat-7B",
"beowolx/MistralHermes-CodePro-7B-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T14:14:49Z |
---
license: apache-2.0
tags:
- merge
- beowolx/CodeNinja-1.0-OpenChat-7B
- beowolx/MistralHermes-CodePro-7B-v1
model-index:
- name: NinjaDolphin-7B
results:
- task:
type: text-generation # Required. Example: automatic-speech-recognition
dataset:
type: openai_humaneval # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: HumanEval # Required. A pretty name for the dataset. Example: Common Voice (French)
metrics:
- type: pass@1 # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 52.4390243902439 # Required. Example: 20.90
name: pass@1 # Optional. Example: Test WER
verified: false
---
# NinjaDolphin-7B
NinjaDolphin-7B is a merge of the following models using:
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
Improving coding ability from [FelixChao/WizardDolphin-7B](https://huggingface.co/FelixChao/WizardDolphin-7B).
## HumanEval (uninstructed and no post-process)
| Metric | Value |
| --- | --- |
| humaneval-python |52.4390243902439|

## 🧩 Configuration
```yaml
models:
- model: FelixChao/WizardDolphin-7B
- model: beowolx/CodeNinja-1.0-OpenChat-7B
parameters:
density: 0.53
weight: 0.3
- model: beowolx/MistralHermes-CodePro-7B-v1
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: FelixChao/WizardDolphin-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "FelixChao/NinjaDolphin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__NinjaDolphin-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.74|
|AI2 Reasoning Challenge (25-Shot)|65.61|
|HellaSwag (10-Shot) |85.35|
|MMLU (5-Shot) |64.43|
|TruthfulQA (0-shot) |54.94|
|Winogrande (5-shot) |80.27|
|GSM8k (5-shot) |67.85|
|
brucethemoose/Capybara-Fixed-Temp
|
brucethemoose
| 2024-01-16T07:15:10Z | 8 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"sft",
"Yi-34B-200K",
"eng",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T06:19:35Z |
---
language:
- eng
tags:
- sft
- Yi-34B-200K
license:
- mit
datasets:
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
---
## **Nous-Capybara-34B V1.9**
**This is trained on the Yi-34B model with 200K context length, for 3 epochs on the Capybara dataset!**
**First 34B Nous model and first 200K context length Nous model!**
The Capybara series is the first Nous collection of models made by fine-tuning mostly on data created by Nous in-house.
We leverage our novel data synthesis technique called Amplify-instruct (Paper coming soon), the seed distribution and synthesis method are comprised of a synergistic combination of top performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct(WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others, all into one lean holistically formed methodology for the dataset and model. The seed instructions used for the start of synthesized conversations are largely based on highly regarded datasets like Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from posts on the website LessWrong, as well as being supplemented with certain in-house multi-turn datasets like Dove(A successor to Puffin).
While performing great in it's current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, this is 10 times smaller than many similar performing current models, this is signficant when it comes to scaling implications for our next generation of models once we scale our novel syntheiss methods to significantly more examples.
## Process of creation and special thank yous!
This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D.(LDJ) (Paper coming soon), as well as significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations.
Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project.
## Thank you to those of you that have indirectly contributed!
While most of the tokens within Capybara are newly synthsized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds that are used to generate the multi-turn data as part of the Amplify-Instruct synthesis.
The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project.
Datasets in Blue are in-house curations that previously existed prior to Capybara.

## Prompt Format
The reccomended model usage is:
Prefix: ``USER:``
Suffix: ``ASSISTANT:``
Stop token: ``</s>``
## Mutli-Modality!
- We currently have a Multi-modal model based on Capybara V1.9!
https://huggingface.co/NousResearch/Obsidian-3B-V0.5
it is currently only available as a 3B sized model but larger versions coming!
## Notable Features:
- Uses Yi-34B model as the base which is trained for 200K context length!
- Over 60% of the dataset is comprised of multi-turn conversations.(Most models are still only trained for single-turn conversations and no back and forths!)
- Over 1,000 tokens average per conversation example! (Most models are trained on conversation data that is less than 300 tokens per example.)
- Able to effectively do complex summaries of advanced topics and studies. (trained on hundreds of advanced difficult summary tasks developed in-house)
- Ability to recall information upto late 2022 without internet.
- Includes a portion of conversational data synthesized from less wrong posts, discussing very in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts.
## Example Outputs from Capybara V1.9 7B version! (examples from 34B coming soon):



## Benchmarks! (Coming soon!)
## Future model sizes
Capybara V1.9 now currently has a 3B, 7B and 34B size, and we plan to eventually have a 13B and 70B version in the future, as well as a potential 1B version based on phi-1.5 or Tiny Llama.
## How you can help!
In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord!
## Dataset contamination.
We have checked the capybara dataset for contamination for several of the most popular datasets and can confirm that there is no contaminaton found.
We leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks, we found no exact matches, nor did we find any matches down to the 97% similarity level.
The following are benchmarks we checked for contamination against our dataset:
- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All
|
tmnam20/bert-base-multilingual-cased-vnrte-10
|
tmnam20
| 2024-01-16T07:13:53Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:12:41Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vnrte-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VNRTE
type: tmnam20/VieGLUE
config: vnrte
split: validation
args: vnrte
metrics:
- name: Accuracy
type: accuracy
value: 0.999681224099458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vnrte-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VNRTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0044 | 1.28 | 500 | 0.0083 | 0.9978 |
| 0.0001 | 2.55 | 1000 | 0.0026 | 0.9994 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
nullne/ppo-Huggy
|
nullne
| 2024-01-16T07:12:48Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-16T07:12:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nullne/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tmnam20/bert-base-multilingual-cased-qnli-10
|
tmnam20
| 2024-01-16T07:12:41Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:11:29Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-qnli-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QNLI
type: tmnam20/VieGLUE
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.891085484166209
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-qnli-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3198
- Accuracy: 0.8911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4249 | 0.15 | 500 | 0.3656 | 0.8464 |
| 0.3989 | 0.31 | 1000 | 0.3319 | 0.8581 |
| 0.3557 | 0.46 | 1500 | 0.3096 | 0.8688 |
| 0.3257 | 0.61 | 2000 | 0.3055 | 0.8700 |
| 0.3403 | 0.76 | 2500 | 0.2893 | 0.8786 |
| 0.311 | 0.92 | 3000 | 0.2919 | 0.8841 |
| 0.2424 | 1.07 | 3500 | 0.2974 | 0.8838 |
| 0.2663 | 1.22 | 4000 | 0.2966 | 0.8845 |
| 0.2486 | 1.37 | 4500 | 0.2904 | 0.8828 |
| 0.2442 | 1.53 | 5000 | 0.2919 | 0.8810 |
| 0.252 | 1.68 | 5500 | 0.2781 | 0.8880 |
| 0.2514 | 1.83 | 6000 | 0.2754 | 0.8867 |
| 0.254 | 1.99 | 6500 | 0.2692 | 0.8882 |
| 0.1632 | 2.14 | 7000 | 0.3349 | 0.8867 |
| 0.1835 | 2.29 | 7500 | 0.3126 | 0.8902 |
| 0.1725 | 2.44 | 8000 | 0.3145 | 0.8902 |
| 0.1624 | 2.6 | 8500 | 0.3272 | 0.8876 |
| 0.1751 | 2.75 | 9000 | 0.3240 | 0.8882 |
| 0.1653 | 2.9 | 9500 | 0.3235 | 0.8900 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-sst2-1
|
tmnam20
| 2024-01-16T07:10:18Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:09:04Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-sst2-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/SST2
type: tmnam20/VieGLUE
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8841743119266054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-sst2-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4333
- Accuracy: 0.8842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3821 | 0.24 | 500 | 0.3799 | 0.8314 |
| 0.3198 | 0.48 | 1000 | 0.4079 | 0.8417 |
| 0.272 | 0.71 | 1500 | 0.3721 | 0.8670 |
| 0.2847 | 0.95 | 2000 | 0.3885 | 0.8567 |
| 0.1893 | 1.19 | 2500 | 0.4329 | 0.8589 |
| 0.2124 | 1.43 | 3000 | 0.4133 | 0.8532 |
| 0.2208 | 1.66 | 3500 | 0.3665 | 0.8773 |
| 0.2219 | 1.9 | 4000 | 0.4164 | 0.8601 |
| 0.1562 | 2.14 | 4500 | 0.4350 | 0.8635 |
| 0.1399 | 2.38 | 5000 | 0.4571 | 0.8761 |
| 0.1399 | 2.61 | 5500 | 0.4346 | 0.8796 |
| 0.1403 | 2.85 | 6000 | 0.4325 | 0.8819 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-vsfc-1
|
tmnam20
| 2024-01-16T07:03:07Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:01:59Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vsfc-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSFC
type: tmnam20/VieGLUE
config: vsfc
split: validation
args: vsfc
metrics:
- name: Accuracy
type: accuracy
value: 0.936197094125079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vsfc-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VSFC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2403
- Accuracy: 0.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1942 | 1.4 | 500 | 0.2416 | 0.9242 |
| 0.1297 | 2.79 | 1000 | 0.2395 | 0.9337 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-vtoc-1
|
tmnam20
| 2024-01-16T07:01:59Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:00:47Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vtoc-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- name: Accuracy
type: accuracy
value: 0.8083014746040416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vtoc-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6734
- Accuracy: 0.8083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4828 | 2.19 | 500 | 0.7023 | 0.8012 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-qnli-100
|
tmnam20
| 2024-01-16T07:00:47Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:59:30Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-qnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QNLI
type: tmnam20/VieGLUE
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8885227896760022
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-qnli-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3284
- Accuracy: 0.8885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4041 | 0.15 | 500 | 0.3611 | 0.8488 |
| 0.3784 | 0.31 | 1000 | 0.3232 | 0.8603 |
| 0.364 | 0.46 | 1500 | 0.3128 | 0.8642 |
| 0.364 | 0.61 | 2000 | 0.3020 | 0.8702 |
| 0.3236 | 0.76 | 2500 | 0.2960 | 0.8768 |
| 0.3475 | 0.92 | 3000 | 0.2895 | 0.8816 |
| 0.252 | 1.07 | 3500 | 0.3019 | 0.8812 |
| 0.261 | 1.22 | 4000 | 0.2783 | 0.8893 |
| 0.2718 | 1.37 | 4500 | 0.2880 | 0.8832 |
| 0.2407 | 1.53 | 5000 | 0.3017 | 0.8812 |
| 0.254 | 1.68 | 5500 | 0.2775 | 0.8827 |
| 0.2611 | 1.83 | 6000 | 0.2837 | 0.8812 |
| 0.257 | 1.99 | 6500 | 0.2816 | 0.8852 |
| 0.1645 | 2.14 | 7000 | 0.3323 | 0.8845 |
| 0.1679 | 2.29 | 7500 | 0.3568 | 0.8825 |
| 0.1643 | 2.44 | 8000 | 0.3203 | 0.8889 |
| 0.1662 | 2.6 | 8500 | 0.3240 | 0.8878 |
| 0.1558 | 2.75 | 9000 | 0.3302 | 0.8856 |
| 0.1614 | 2.9 | 9500 | 0.3299 | 0.8872 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
afrideva/phi-2-code-instruct-GGUF
|
afrideva
| 2024-01-16T06:59:58Z | 29 | 1 | null |
[
"gguf",
"code",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"dataset:sahil2801/CodeAlpaca-20k",
"arxiv:1910.09700",
"base_model:parsak/phi-2-code-instruct",
"base_model:quantized:parsak/phi-2-code-instruct",
"license:mit",
"region:us"
] |
text-generation
| 2024-01-16T06:47:16Z |
---
base_model: parsak/phi-2-code-instruct
datasets:
- sahil2801/CodeAlpaca-20k
inference: false
language:
- en
license: mit
model_creator: parsak
model_name: phi-2-code-instruct
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- code
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# parsak/phi-2-code-instruct-GGUF
Quantized GGUF model files for [phi-2-code-instruct](https://huggingface.co/parsak/phi-2-code-instruct) from [parsak](https://huggingface.co/parsak)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi-2-code-instruct.fp16.gguf](https://huggingface.co/afrideva/phi-2-code-instruct-GGUF/resolve/main/phi-2-code-instruct.fp16.gguf) | fp16 | 5.56 GB |
| [phi-2-code-instruct.q2_k.gguf](https://huggingface.co/afrideva/phi-2-code-instruct-GGUF/resolve/main/phi-2-code-instruct.q2_k.gguf) | q2_k | 1.11 GB |
| [phi-2-code-instruct.q3_k_m.gguf](https://huggingface.co/afrideva/phi-2-code-instruct-GGUF/resolve/main/phi-2-code-instruct.q3_k_m.gguf) | q3_k_m | 1.43 GB |
| [phi-2-code-instruct.q4_k_m.gguf](https://huggingface.co/afrideva/phi-2-code-instruct-GGUF/resolve/main/phi-2-code-instruct.q4_k_m.gguf) | q4_k_m | 1.74 GB |
| [phi-2-code-instruct.q5_k_m.gguf](https://huggingface.co/afrideva/phi-2-code-instruct-GGUF/resolve/main/phi-2-code-instruct.q5_k_m.gguf) | q5_k_m | 2.00 GB |
| [phi-2-code-instruct.q6_k.gguf](https://huggingface.co/afrideva/phi-2-code-instruct-GGUF/resolve/main/phi-2-code-instruct.q6_k.gguf) | q6_k | 2.29 GB |
| [phi-2-code-instruct.q8_0.gguf](https://huggingface.co/afrideva/phi-2-code-instruct-GGUF/resolve/main/phi-2-code-instruct.q8_0.gguf) | q8_0 | 2.96 GB |
## Original Model Card:
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Parsa K.]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [English, Python (Responses in other programming languages might be inconsistent)]
- **License:** [MIT]
- **Finetuned from model [optional]:** [[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tmnam20/bert-base-multilingual-cased-qqp-10
|
tmnam20
| 2024-01-16T06:59:29Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:58:07Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-cased-qqp-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QQP
type: tmnam20/VieGLUE
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8885975760573831
- name: F1
type: f1
value: 0.8473737716028464
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-qqp-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3064
- Accuracy: 0.8886
- F1: 0.8474
- Combined Score: 0.8680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3263 | 0.44 | 5000 | 0.3272 | 0.8557 | 0.8081 | 0.8319 |
| 0.3084 | 0.88 | 10000 | 0.2968 | 0.8680 | 0.8191 | 0.8436 |
| 0.2424 | 1.32 | 15000 | 0.2998 | 0.8768 | 0.8324 | 0.8546 |
| 0.2171 | 1.76 | 20000 | 0.2995 | 0.8847 | 0.8449 | 0.8648 |
| 0.1796 | 2.2 | 25000 | 0.3124 | 0.8857 | 0.8424 | 0.8640 |
| 0.1811 | 2.64 | 30000 | 0.2963 | 0.8883 | 0.8477 | 0.8680 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
thanhnew2001/starcoder-7b-taipy25
|
thanhnew2001
| 2024-01-16T06:59:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T04:53:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tmnam20/bert-base-multilingual-cased-vtoc-10
|
tmnam20
| 2024-01-16T06:58:06Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:56:57Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vtoc-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- name: Accuracy
type: accuracy
value: 0.8143091206990716
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vtoc-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6605
- Accuracy: 0.8143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4988 | 2.19 | 500 | 0.6809 | 0.8061 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-vtoc-100
|
tmnam20
| 2024-01-16T06:56:56Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:55:47Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vtoc-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- name: Accuracy
type: accuracy
value: 0.813216821409066
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vtoc-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6706
- Accuracy: 0.8132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4716 | 2.19 | 500 | 0.6870 | 0.8083 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-rte-100
|
tmnam20
| 2024-01-16T06:55:47Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:54:35Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-rte-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.7075812274368231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-rte-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6350
- Accuracy: 0.7076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-rte-10
|
tmnam20
| 2024-01-16T06:51:01Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:49:53Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-rte-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6498194945848376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-rte-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6733
- Accuracy: 0.6498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Seokeon/full_pp_berry_bowl
|
Seokeon
| 2024-01-16T06:50:27Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T04:57:00Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks bowl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/full_pp_berry_bowl
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks bowl using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
tmnam20/bert-base-multilingual-cased-sst2-10
|
tmnam20
| 2024-01-16T06:49:52Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:48:42Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-sst2-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/SST2
type: tmnam20/VieGLUE
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8841743119266054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-sst2-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4234
- Accuracy: 0.8842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4066 | 0.24 | 500 | 0.3869 | 0.8291 |
| 0.3414 | 0.48 | 1000 | 0.3499 | 0.8486 |
| 0.3133 | 0.71 | 1500 | 0.3743 | 0.8509 |
| 0.2797 | 0.95 | 2000 | 0.4119 | 0.8475 |
| 0.236 | 1.19 | 2500 | 0.3891 | 0.8670 |
| 0.2202 | 1.43 | 3000 | 0.3640 | 0.8739 |
| 0.1889 | 1.66 | 3500 | 0.3829 | 0.8681 |
| 0.1847 | 1.9 | 4000 | 0.3687 | 0.8796 |
| 0.1288 | 2.14 | 4500 | 0.4524 | 0.8807 |
| 0.1478 | 2.38 | 5000 | 0.4259 | 0.875 |
| 0.1761 | 2.61 | 5500 | 0.4060 | 0.8819 |
| 0.1487 | 2.85 | 6000 | 0.4408 | 0.8807 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
JaehwiJeon/videomae-base-finetuned-ucf101-subset
|
JaehwiJeon
| 2024-01-16T06:49:23Z | 48 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-01-16T06:13:31Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2485
- Accuracy: 0.9032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7587 | 0.25 | 75 | 1.2436 | 0.6714 |
| 0.9272 | 1.25 | 150 | 0.6259 | 0.7857 |
| 0.2074 | 2.25 | 225 | 0.4821 | 0.8429 |
| 0.2188 | 3.25 | 300 | 0.1336 | 0.9571 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-cola-10
|
tmnam20
| 2024-01-16T06:48:42Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:47:22Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- matthews_correlation
model-index:
- name: bert-base-multilingual-cased-cola-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/COLA
type: tmnam20/VieGLUE
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.1009230023823325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-cola-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6448
- Matthews Correlation: 0.1009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5762 | 1.87 | 500 | 0.6181 | 0.0372 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
rudih-com/Llama-2-13b-chat-hf-fine-tuned
|
rudih-com
| 2024-01-16T06:44:34Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"sharded",
"fine-tuned",
"conversational",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T06:16:22Z |
---
language:
- en
pipeline_tag: text-generation
tags:
- llama
- llama-2
- sharded
- fine-tuned
---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- sharded
---
# **llama-2-chat-7b-hf (sharded)**
This is a sharded version of Meta's Llama 2 chat 7b model, specifically the hugging face version.
All details below are copied from the original repo.
Colab notebook for sharding: https://colab.research.google.com/drive/1f1q9qc56wzB_7-bjgNyLlO6f28ui1esQ
Colab notebook for inference: https://colab.research.google.com/drive/1zxwaTSvd6PSHbtyaoa7tfedAS31j_N6m
## Inference with Google Colab and HuggingFace 🤗
Get started by saving your own copy of this [fLlama_Inference notebook](https://colab.research.google.com/drive/1Ow5cQ0JNv-vXsT-apCceH6Na3b4L7JyW?usp=sharing).
You will be able to run inference using a free Colab notebook if you select a gpu runtime. See the notebook for more details.
~
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
dagbs/laserxtral-GGUF
|
dagbs
| 2024-01-16T06:42:45Z | 40 | 7 | null |
[
"gguf",
"en",
"base_model:cognitivecomputations/laserxtral",
"base_model:quantized:cognitivecomputations/laserxtral",
"license:cc-by-nc-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T05:01:06Z |
---
license: cc-by-nc-2.0
base_model: cognitivecomputations/laserxtral
language:
- en
quantized_by: dagbs
---
# Laserxtral - 4x7b - GGUF
- Model creator(s): [David](https://huggingface.co/DavidGF), [Fernando](https://huggingface.co/fernandofernandes) and [Eric](https://huggingface.co/ehartford)
- Original model: [cognitivecomputations/laserxtral](https://huggingface.co/cognitivecomputations/laserxtral)

|
tmnam20/bert-base-multilingual-cased-wnli-100
|
tmnam20
| 2024-01-16T06:42:07Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:40:49Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-wnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/WNLI
type: tmnam20/VieGLUE
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5352112676056338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-wnli-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.5352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.