modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
string (length 5 to 139) | string (length 2 to 42) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-09 18:59:16) | int64 (0 to 223M) | int64 (0 to 11.7k) | string (551 classes) | list (length 1 to 4.05k) | string (55 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-09 18:27:33) | string (length 11 to 1.01M)
jlvdoorn/whisper-large-v2-atco2-asr | jlvdoorn | 2024-01-16T10:03:04Z | 14 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "doi:10.57967/hf/1376", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-06-15T15:16:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-atco2-asr
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-atco2-asr
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7915
- Wer: 18.7722
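Since the repository is tagged for automatic speech recognition with 🤗 Transformers, a minimal usage sketch might look like the following (the audio file path is a placeholder, not part of this repository):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="jlvdoorn/whisper-large-v2-atco2-asr",
)

# Transcribe a local recording (the path is illustrative).
print(asr("example_atc_recording.wav")["text"])
```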
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 2800
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1333 | 3.57 | 100 | 0.5298 | 21.8861 |
| 0.0338 | 7.14 | 200 | 0.5430 | 18.8167 |
| 0.0132 | 10.71 | 300 | 0.5830 | 17.9270 |
| 0.0067 | 14.29 | 400 | 0.6011 | 17.6157 |
| 0.0009 | 17.86 | 500 | 0.6582 | 18.8167 |
| 0.0004 | 21.43 | 600 | 0.6743 | 18.7722 |
| 0.0003 | 25.0 | 700 | 0.6919 | 18.4609 |
| 0.0004 | 28.57 | 800 | 0.6943 | 26.6459 |
| 0.0004 | 32.14 | 900 | 0.7090 | 18.5053 |
| 0.0002 | 35.71 | 1000 | 0.7212 | 18.8167 |
| 0.0001 | 39.29 | 1100 | 0.7305 | 18.8612 |
| 0.0001 | 42.86 | 1200 | 0.7383 | 18.6388 |
| 0.0001 | 46.43 | 1300 | 0.7451 | 18.5498 |
| 0.0001 | 50.0 | 1400 | 0.7515 | 18.5498 |
| 0.0001 | 53.57 | 1500 | 0.7573 | 18.5498 |
| 0.0001 | 57.14 | 1600 | 0.7622 | 18.5943 |
| 0.0001 | 60.71 | 1700 | 0.7666 | 18.5943 |
| 0.0001 | 64.29 | 1800 | 0.7705 | 18.5498 |
| 0.0001 | 67.86 | 1900 | 0.7744 | 18.6833 |
| 0.0001 | 71.43 | 2000 | 0.7778 | 18.6833 |
| 0.0001 | 75.0 | 2100 | 0.7808 | 18.7278 |
| 0.0001 | 78.57 | 2200 | 0.7837 | 18.6833 |
| 0.0001 | 82.14 | 2300 | 0.7856 | 18.6388 |
| 0.0001 | 85.71 | 2400 | 0.7881 | 18.6833 |
| 0.0001 | 89.29 | 2500 | 0.7896 | 18.6388 |
| 0.0001 | 92.86 | 2600 | 0.7905 | 18.7278 |
| 0.0001 | 96.43 | 2700 | 0.7915 | 18.8167 |
| 0.0001 | 100.0 | 2800 | 0.7915 | 18.7722 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| Seokeon/V14_R256_lora_pp_grey_sloth_plushie | Seokeon | 2024-01-16T10:02:38Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:57:40Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks stuffed animal
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R256_lora_pp_grey_sloth_plushie
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
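A minimal loading sketch with 🤗 Diffusers, assuming the repository stores the adapter in the standard diffusers LoRA format (the prompt wording and output filename are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Apply the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("Seokeon/V14_R256_lora_pp_grey_sloth_plushie")

# Generate with the instance prompt used during training.
image = pipe("a photo of sks stuffed animal on a wooden table").images[0]
image.save("sks_stuffed_animal.png")
```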
| Amir20039/TestPublicationASU | Amir20039 | 2024-01-16T10:01:05Z | 0 | 0 | null | ["region:us"] | null | 2024-01-16T10:00:11Z |
# Streamlit RecSys
## Demo
[Streamlit Cloud](https://movie-recsys.streamlit.app)
## Task Roadmap
**Next**
- [x] Add an alternative sort/ranking by movie popularity (`popularity`)
- [x] Add a `selectbox()` for choosing the model type and wire it into the app code
- [ ] Build a sample of user movie ratings (add data loading from a URL and adapt to the new structure)
- [ ] Project GitHub repository, with a follow-up merge request from each student
- [ ] Add user identification (custom implementation or `streamlit credentials`)
- [ ] Persist the user's selection (`user_login`, `user_password`, `added_movies`)
- [ ] Persist the user's ratings of selected movies (`st.slider` or `st.number_input`)
- [ ] A model based on user preferences (first by movie list, later by rating)
- [ ] Deploy the app to a cloud service (streamlit-hub, hf spaces, heroku)
---
**If time allows**
- [ ] Add recommendation-quality metrics for the per-user model
- [ ] User feedback on recommendations
- [ ] A/B testing with users (run it ourselves or ask the students)
---
**Done**
- [x] Fix the issue with poster links
- [x] Baseline collaborative-filtering model with simple sorting/ranking by `cos_sim score`
- [x] Fix the issue with selected movies showing up in the model's recommendations
## GitHub Workflow
1. Fork the repository https://github.com/valeriylo/RecSys_ML_ASU into your account (the Fork button in the top-right corner)
2. Clone your repository to a local machine with `git clone` (install `git` first)
3. Switch to the `students` branch with `git checkout students`, or pick the branch in your IDE
4. Create a virtual environment with `python -m venv venv` (install `python` first)
5. Activate the virtual environment with `source venv/bin/activate` (on Windows, `source venv/Scripts/activate`)
6. Install the dependencies with `pip install -r requirements.txt`
7. After making code changes, stage them with `git add <file_name>`, or select the files in your IDE
8. Commit the changes with `git commit -m "commit message"`, or enter the message in your IDE
9. Push the changes to your repository with `git push origin students`, or pick the branch in your IDE
10. In your GitHub repository, open a Merge Request into the `students` branch of the main repository
11. After the changes are merged into the `students` branch of the main repository, run `git pull origin students`, or update the branch in your IDE
12. Repeat steps 7-11
## Prerequisite
- Python 3.8+
## Installation
```
pip install -r requirements.txt
```
## How to Run App locally
```
streamlit run main.py
```
| Seokeon/V14_R384_lora_none_dog8 | Seokeon | 2024-01-16T09:57:28Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:54:40Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_dog8
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| facebook/audio-magnet-small | facebook | 2024-01-16T09:57:18Z | 222 | 8 | audiocraft | ["audiocraft", "magnet", "text-to-audio", "arxiv:2401.04577", "license:cc-by-nc-4.0", "region:us"] | text-to-audio | 2024-01-10T20:16:04Z |
---
inference: true
tags:
- magnet
- audiocraft
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
---
# Audio-MAGNeT - Small - 300M
MAGNeT is a text-to-music and text-to-sound model capable of generating high-quality audio samples conditioned on text descriptions.
It is a masked generative non-autoregressive Transformer trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike prior work, MAGNeT requires neither semantic token conditioning nor model cascading, and it generates all 4 codebooks using a single non-autoregressive Transformer.
MAGNeT was published in [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577) by *Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi*.
Six checkpoints are released:
- [small-10secs](https://huggingface.co/facebook/magnet-small-10secs)
- [medium-10secs](https://huggingface.co/facebook/magnet-medium-10secs)
- [small-30secs](https://huggingface.co/facebook/magnet-small-30secs)
- [medium-30secs](https://huggingface.co/facebook/magnet-medium-30secs)
- [**audio-small** (this checkpoint)](https://huggingface.co/facebook/audio-magnet-small)
- [audio-medium](https://huggingface.co/facebook/audio-magnet-medium)
## 🤗 Transformers Usage
Coming soon...
## Audiocraft Usage
You can run MAGNeT locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MAGNeT
from audiocraft.data.audio import audio_write
model = MAGNeT.get_pretrained("facebook/audio-magnet-small")
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
    # Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MAGNeT was trained between November 2023 and January 2024.
**Model version:** This is version 1 of the model.
**Model type:** MAGNeT consists of an EnCodec model for audio tokenization and a non-autoregressive language model based on the transformer architecture for music modeling. The model comes in different sizes (300M and 1.5B parameters) and two variants: a model trained for the text-to-music generation task and a model trained for text-to-audio generation.
**Paper or resources for more information:** More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577).
**Citation details:**
```
@misc{ziv2024masked,
title={Masked Audio Generation using a Single Non-Autoregressive Transformer},
author={Alon Ziv and Itai Gat and Gael Le Lan and Tal Remez and Felix Kreuk and Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2024},
eprint={2401.04577},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MAGNeT can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MAGNeT is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we used the state-of-the-art music source separation method,
namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs),
in order to keep only instrumental tracks. This explains the difference in objective metrics compared with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency |
|---|---|---|---|
| facebook/magnet-small-10secs | 4.22 | 1.11 | 0.28 |
| facebook/magnet-medium-10secs | 4.61 | 1.14 | 0.28 |
| facebook/magnet-small-30secs | 4.35 | 1.17 | 0.28 |
| facebook/magnet-medium-30secs | 4.63 | 1.20 | 0.28 |
More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 16K hours of data; we believe that scaling to larger datasets could further improve its performance.
**Mitigations:** Tracks that include vocals have been removed from the data source using corresponding tags, and using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The data source may lack diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MAGNeT is a model developed for artificial intelligence research on music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
## Audio-MAGNeT - Sound-effect generation models
### Training datasets
The audio-magnet models were trained on the following data sources: a subset of AudioSet (Gemmeke et al., 2017), [BBC sound effects](https://sound-effects.bbcrewind.co.uk/), AudioCaps (Kim et al., 2019), Clotho v2 (Drossos et al., 2020), VGG-Sound (Chen et al., 2020), FSD50K (Fonseca et al., 2021), [Free To Use Sounds](https://www.freetousesounds.com/all-in-one-bundle/), [Sonniss Game Effects](https://sonniss.com/gameaudiogdc), [WeSoundEffects](https://wesoundeffects.com/we-sound-effects-bundle-2020/), [Paramount Motion - Odeon Cinematic Sound Effects](https://www.paramountmotion.com/odeon-sound-effects).
### Evaluation datasets
The audio-magnet models (sound effect generation) were evaluated on the [AudioCaps benchmark](https://audiocaps.github.io/).
### Evaluation results
Below are the objective metrics obtained with the released audio-magnet models on AudioCaps (consisting of 10-second long samples).
| Model | Frechet Audio Distance | KLD |
|---|---|---|
| **facebook/audio-magnet-small** | **3.21** | **1.42** |
| facebook/audio-magnet-medium | 2.32 | 1.64 |
| facebook/magnet-small-10secs | facebook | 2024-01-16T09:56:14Z | 1,223 | 21 | audiocraft | ["audiocraft", "magnet", "text-to-audio", "arxiv:2401.04577", "license:cc-by-nc-4.0", "region:us"] | text-to-audio | 2024-01-10T14:49:45Z |
---
inference: false
tags:
- magnet
- audiocraft
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
widget:
- text: "a funky house with 80s hip hop vibes"
  example_title: "Prompt 1"
- text: "a chill song with influences from lofi, chillstep and downtempo"
  example_title: "Prompt 2"
- text: "a catchy beat for a podcast intro"
  example_title: "Prompt 3"
---
# MAGNeT - Small - 300M - 10secs
MAGNeT is a text-to-music and text-to-sound model capable of generating high-quality audio samples conditioned on text descriptions.
It is a masked generative non-autoregressive Transformer trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike prior work, MAGNeT requires neither semantic token conditioning nor model cascading, and it generates all 4 codebooks using a single non-autoregressive Transformer.
MAGNeT was published in [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577) by *Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi*.
Six checkpoints are released:
- [**small-10secs** (this checkpoint)](https://huggingface.co/facebook/magnet-small-10secs)
- [medium-10secs](https://huggingface.co/facebook/magnet-medium-10secs)
- [small-30secs](https://huggingface.co/facebook/magnet-small-30secs)
- [medium-30secs](https://huggingface.co/facebook/magnet-medium-30secs)
- [audio-small](https://huggingface.co/facebook/audio-magnet-small)
- [audio-medium](https://huggingface.co/facebook/audio-magnet-medium)
## 🤗 Transformers Usage
Coming soon...
## Audiocraft Usage
You can run MAGNeT locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MAGNeT
from audiocraft.data.audio import audio_write
model = MAGNeT.get_pretrained("facebook/magnet-small-10secs")
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
    # Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MAGNeT was trained between November 2023 and January 2024.
**Model version:** This is version 1 of the model.
**Model type:** MAGNeT consists of an EnCodec model for audio tokenization and a non-autoregressive language model based on the transformer architecture for music modeling. The model comes in different sizes (300M and 1.5B parameters) and two variants: a model trained for the text-to-music generation task and a model trained for text-to-audio generation.
**Paper or resources for more information:** More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577).
**Citation details:**
```
@misc{ziv2024masked,
title={Masked Audio Generation using a Single Non-Autoregressive Transformer},
author={Alon Ziv and Itai Gat and Gael Le Lan and Tal Remez and Felix Kreuk and Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2024},
eprint={2401.04577},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MAGNeT can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MAGNeT is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we used the state-of-the-art music source separation method,
namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs),
in order to keep only instrumental tracks. This explains the difference in objective metrics compared with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency |
|---|---|---|---|
| **facebook/magnet-small-10secs** | **4.22** | **1.11** | **0.28** |
| facebook/magnet-medium-10secs | 4.61 | 1.14 | 0.28 |
| facebook/magnet-small-30secs | 4.35 | 1.17 | 0.28 |
| facebook/magnet-medium-30secs | 4.63 | 1.20 | 0.28 |
More information can be found in the paper [Masked Audio Generation using a Single Non-Autoregressive Transformer](https://arxiv.org/abs/2401.04577), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 16K hours of data; we believe that scaling to larger datasets could further improve its performance.
**Mitigations:** Tracks that include vocals have been removed from the data source using corresponding tags, and using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The data source may lack diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will make it possible to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MAGNeT is a model developed for artificial intelligence research on music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
## Audio-MAGNeT - Sound-effect generation models
### Training datasets
The audio-magnet models were trained on the following data sources: a subset of AudioSet (Gemmeke et al., 2017), [BBC sound effects](https://sound-effects.bbcrewind.co.uk/), AudioCaps (Kim et al., 2019), Clotho v2 (Drossos et al., 2020), VGG-Sound (Chen et al., 2020), FSD50K (Fonseca et al., 2021), [Free To Use Sounds](https://www.freetousesounds.com/all-in-one-bundle/), [Sonniss Game Effects](https://sonniss.com/gameaudiogdc), [WeSoundEffects](https://wesoundeffects.com/we-sound-effects-bundle-2020/), [Paramount Motion - Odeon Cinematic Sound Effects](https://www.paramountmotion.com/odeon-sound-effects).
### Evaluation datasets
The audio-magnet models (sound effect generation) were evaluated on the [AudioCaps benchmark](https://audiocaps.github.io/).
### Evaluation results
Below are the objective metrics obtained with the released audio-magnet models on AudioCaps (consisting of 10-second long samples).
| Model | Frechet Audio Distance | KLD |
|---|---|---|
| facebook/audio-magnet-small | 3.21 | 1.42 |
| facebook/audio-magnet-medium | 2.32 | 1.64 |
| Seokeon/V14_R384_lora_none_dog6 | Seokeon | 2024-01-16T09:54:23Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:51:35Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_dog6
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| racheltong/va_openai-whisper-small-en-colab_6000_0.001_4 | racheltong | 2024-01-16T09:52:49Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-01-16T09:52:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Seokeon/V14_R384_lora_pp_dog2 | Seokeon | 2024-01-16T09:52:24Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:46:18Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_pp_dog2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| Seokeon/V14_R384_lora_none_dog2 | Seokeon | 2024-01-16T09:51:15Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:48:29Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_dog2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| Seokeon/V14_R256_lora_pp_dog2 | Seokeon | 2024-01-16T09:50:06Z | 3 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:46:25Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R256_lora_pp_dog2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| Seokeon/V14_R384_lora_none_cat | Seokeon | 2024-01-16T09:48:05Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:45:17Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_R384_lora_none_cat
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| devesh123098/q-FrozenLake-v1-4x4-noSlippery | devesh123098 | 2024-01-16T09:43:00Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-16T09:42:55Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is not a published package function; see the sketch after this block.
model = load_from_hub(repo_id="devesh123098/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
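A minimal sketch of such a `load_from_hub` helper, assuming the checkpoint is stored as a pickled dictionary (as the `q-learning.pkl` filename above suggests); the helper name and file layout follow the usage snippet, not a published API:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning checkpoint from the Hub and load it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```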
| Shrideep/Retrieval_Augmented_Generation | Shrideep | 2024-01-16T09:36:43Z | 0 | 1 | null | ["RAG", "Retrieval Augmented Generation", "llama-index", "en", "dataset:chromadb/paul_graham_essay", "region:us"] | null | 2024-01-16T07:35:16Z |
---
datasets:
- chromadb/paul_graham_essay
language:
- en
tags:
- RAG
- Retrieval Augmented Generation
- llama-index
---
# Summary:
Retrieval Augmented Generation (RAG) is a technique for specializing a language model in a specific knowledge domain by feeding it relevant data so that it can give better answers.
# How does RAG work?
1. Prepare/preprocess your input data, i.e. tokenization & vectorization.
2. Feed the processed data to the Language Model.
3. Index the stored data so that content matching the context of a query can be retrieved.
# Implementing RAG with llama-index
### 1. Load relevant data and build an index
```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
```
### 2. Query your data
```python
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```
# My application of RAG on ChatGPT
Check RAG.ipynb
| Federic/lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes | Federic | 2024-01-16T09:35:00Z | 0 | 0 | null | ["safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us"] | null | 2024-01-15T10:32:00Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
model-index:
- name: lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
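The card provides no usage snippet. Purely as an illustration, and assuming the repository holds PEFT-format LoRA adapters for the Mistral base model (it may instead contain merged full weights, in which case `AutoModelForCausalLM` would apply), loading could look roughly like this:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "Federic/lora-fine-tuning-llama2-SQL-lora-1000-2-dataset-size-open-hermes"

# Loads the base model declared in the adapter config and applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```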
| notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-exl2-5.0bpw | notstoic | 2024-01-16T09:31:59Z | 7 | 0 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-16T09:26:36Z |
---
base_model: []
tags:
- mergekit
- merge
---
# Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-exl2-5.0bpw
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
An experimental merge.
Prompt format: ChatML or Mixtral-8x7B-Instruct-v0.1
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./models/Mixtral-8x7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
* [Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./models/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.5
      weight: 1.0
  - model: ./models/Nous-Hermes-2-Mixtral-8x7B-DPO
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: ./models/Mixtral-8x7B-v0.1
parameters:
  #normalize: false
  #int8_mask: true
dtype: bfloat16
```
| Seokeon/V14_lora_none_berry_bowl | Seokeon | 2024-01-16T09:31:33Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:27:45Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks bowl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_berry_bowl
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks bowl using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| 567-labs/bge-base-en-v1.5-ft-quora-0.9 | 567-labs | 2024-01-16T09:30:58Z | 8 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-01-16T09:30:47Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
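Because this checkpoint was fine-tuned for sentence similarity (on Quora duplicate questions, judging by the repository name), a natural follow-up is scoring a sentence pair. A small sketch with illustrative example sentences, using the repository id from the record above in place of the `{MODEL_NAME}` placeholder:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("567-labs/bge-base-en-v1.5-ft-quora-0.9")

# Encode a candidate duplicate-question pair and score it with cosine similarity.
embeddings = model.encode(
    ["How do I learn Python quickly?", "What is the fastest way to learn Python?"],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))
```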
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7960 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 100,
    "weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| liamhvn/realistic-vision-v51 | liamhvn | 2024-01-16T09:30:22Z | 14 | 1 | diffusers | ["diffusers", "safetensors", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-03-27T06:09:38Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Realistic Vision V5.1 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and set **model_id** to "realistic-vision-v51".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/realistic-vision-v51)
Model link: [View model](https://stablediffusionapi.com/models/realistic-vision-v51)
Credits: [View credits](https://civitai.com/?query=Realistic%20Vision%20V5.1)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "realistic-vision-v51",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
| Seokeon/V14_lora_none_bear_plushie | Seokeon | 2024-01-16T09:27:26Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:23:10Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks stuffed animal
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_bear_plushie
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| Seokeon/V14_lora_none_monster_toy | Seokeon | 2024-01-16T09:22:51Z | 1 | 1 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-01-16T09:19:00Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_monster_toy
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
| ssssseeee/my_awesome_billsum_model | ssssseeee | 2024-01-16T09:18:10Z | 4 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:lcw99/t5-base-korean-text-summary", "base_model:finetune:lcw99/t5-base-korean-text-summary", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-01-16T08:34:55Z |
---
base_model: lcw99/t5-base-korean-text-summary
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [lcw99/t5-base-korean-text-summary](https://huggingface.co/lcw99/t5-base-korean-text-summary) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1454
- Rouge1: 0.1698
- Rouge2: 0.0688
- Rougel: 0.1623
- Rougelsum: 0.1632
- Gen Len: 19.0
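Given the base model (a Korean T5 summarizer) and the `text2text-generation` tag, a minimal usage sketch might look like this (the input string is a placeholder, and `max_length` is only chosen to roughly match the reported generation length):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ssssseeee/my_awesome_billsum_model",
)

# Summarize a (placeholder) Korean document.
text = "요약할 한국어 문서 텍스트"  # placeholder input
print(summarizer(text, max_length=20)[0]["summary_text"])
```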
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.1729 | 0.1723 | 0.072 | 0.1654 | 0.1656 | 19.0 |
| 1.4585 | 2.0 | 990 | 1.1454 | 0.1698 | 0.0688 | 0.1623 | 0.1632 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| tmnam20/mdeberta-v3-base-wnli-100 | tmnam20 | 2024-01-16T09:15:13Z | 6 | 0 | transformers | ["transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "en", "dataset:tmnam20/VieGLUE", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-16T09:12:42Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-wnli-100
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: tmnam20/VieGLUE/WNLI
      type: tmnam20/VieGLUE
      config: wnli
      split: validation
      args: wnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-wnli-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6890
- Accuracy: 0.5634
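A minimal usage sketch via the text-classification pipeline (the example sentence is illustrative, and the label mapping is whatever the checkpoint's config defines):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tmnam20/mdeberta-v3-base-wnli-100",
)
print(classifier("The trophy does not fit into the suitcase because it is too large."))
```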
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| carinnew/STdictabert-cls | carinnew | 2024-01-16T09:14:52Z | 4 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-01-16T09:14:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1229 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 50,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
pborchert/bert-ic
|
pborchert
| 2024-01-16T09:14:34Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"industry classification",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-12T10:04:55Z |
---
license: cc-by-4.0
language:
- en
pipeline_tag: fill-mask
tags:
- bert
- industry classification
library_name: transformers
widget:
- text: "Sanofi is in the [MASK] industry."
- text: "The current ratio measures [MASK]."
---
|
Seokeon/V14_full_pp_cat
|
Seokeon
| 2024-01-16T09:14:25Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T08:41:51Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/V14_full_pp_cat
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
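A minimal inference sketch (not part of the original card), assuming the standard diffusers text-to-image workflow and a CUDA device:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Seokeon/V14_full_pp_cat", torch_dtype=torch.float16
).to("cuda")

# "sks cat" is the instance prompt this checkpoint was trained on.
image = pipe("a photo of sks cat sitting on a wooden chair").images[0]
image.save("sks_cat.png")
```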
|
carinnew/STdictabert-max
|
carinnew
| 2024-01-16T09:14:19Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-16T09:13:57Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
return torch.max(token_embeddings, 1)[0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1229 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 50,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
carinnew/STdictabert-mean
|
carinnew
| 2024-01-16T09:13:28Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-07T11:28:00Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1229 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 50,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
tmnam20/mdeberta-v3-base-wnli-10
|
tmnam20
| 2024-01-16T09:12:42Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T09:10:29Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-wnli-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/WNLI
type: tmnam20/VieGLUE
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-wnli-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6899
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
QiaoyuZheng/RP3D-DiagModel
|
QiaoyuZheng
| 2024-01-16T09:09:19Z | 0 | 3 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-12-30T08:01:13Z |
---
license: apache-2.0
---
# RP3D-DiagModel
## About Checkpoint
The detailed parameter we use for training is in the following:
```
start_class: 0
end_class: 5569
backbone: 'resnet'
level: 'articles' # represents the disorder level
depth: 32
ltype: 'MultiLabel' # represents the Binary Cross Entropy Loss
augment: True # represents the medical data augmentation
split: 'late' # represents the late fusion strategy
```
### Load Model
```
# Load backbone
model = RadNet(num_cls=num_classes, backbone=backbone, depth=depth, ltype=ltype, augment=augment, fuse=fuse, ke=ke, encoded=encoded, adapter=adapter)
pretrained_weights = torch.load("path/to/pytorch_model_32_late.bin")
missing, unexpect = model.load_state_dict(pretrained_weights,strict=False)
print("missing_cpt:", missing)
print("unexpect_cpt:", unexpect)
# If KE is set True, load text encoder
medcpt = MedCPT_clinical(bert_model_name = 'ncbi/MedCPT-Query-Encoder')
checkpoint = torch.load('path/to/epoch_state.pt',map_location='cpu')['state_dict']
load_checkpoint = {key.replace('module.', ''): value for key, value in checkpoint.items()}
missing, unexpect = medcpt.load_state_dict(load_checkpoint, strict=False)
print("missing_cpt:", missing)
print("unexpect_cpt:", unexpect)
```
## Why do we provide this checkpoint?
All the early-fusion checkpoints can be further fine-tuned from this checkpoint. If you need other checkpoints with different parameter settings, there are two possible ways:
### Fine-tune from this checkpoint
```
checkpoint: "None"
safetensor: path to this checkpoint (pytorch_model.bin)
```
### Contact Us
Email the author: three-world@sjtu.edu.cn
## About Dataset
Please refer to [RP3D-DiagDS](https://huggingface.co/datasets/QiaoyuZheng/RP3D-DiagDS)
For more information, please refer to our instructions on [github](https://github.com/qiaoyu-zheng/RP3D-Diag) to download and use.
|
Namitoo/verify_01
|
Namitoo
| 2024-01-16T09:09:04Z | 0 | 1 |
transformers
|
[
"transformers",
"就",
"甲方",
"jd ",
"hf",
"happy",
"model",
"bert",
"gpt",
"moe",
"mixture",
"didi",
"dasfj ",
"sdjfka ",
"sadjf ",
"83294",
"jdksafj ",
"ch",
"dn",
"en",
"gi",
"ds ",
"data",
"ana",
"lysis",
"dj ",
"image-to-video",
"zh",
"dataset:fka/awesome-chatgpt-prompts",
"model-index",
"endpoints_compatible",
"region:us"
] |
image-to-video
| 2023-12-27T03:50:36Z |
---
pipeline_tag: image-to-video
metrics:
- bertscore
- accuracy
base_model: dudunamito
model-index:
- name: StarCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
split: test
revision: 123i23478283482904832482904890
metrics:
- name: pass@1
type: pass@1
value: 0.408
verified: true
- task:
type: tabular
dataset:
type: vc
name: HumanEval
metrics:
- name: +++
type: jj
value: 123@--------------faskiwru qjaskjf 123i23478283482904832482904890123@--------------faskiwru qjaskjf 123i23478283482904832482904890------------faskiwru qjaskjf 123i23478283482904832482904890
verified: false
language:
- zh
- en
tags:
- 就
- 甲方
- 'jd '
- hf
- happy
- model
- bert
- gpt
- transformers
- moe
- mixture
- didi
- 'dasfj '
- 'sdjfka '
- 'sadjf '
- '83294'
- 'jdksafj '
- ch
- dn
- en
- gi
- 'ds '
- data
- ana
- lysis
- 'dj '
datasets:
- fka/awesome-chatgpt-prompts
- fka/awesome-chatgpt-prompts
- fka/awesome-chatgpt-prompts
- fka/awesome-chatgpt-prompts
- fka/awesome-chatgpt-prompts
---
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
|
chelouche9/my-awesome-adapter
|
chelouche9
| 2024-01-16T09:08:56Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:sentiment/rotten_tomatoes",
"roberta",
"dataset:rotten_tomatoes",
"region:us"
] | null | 2024-01-16T09:08:55Z |
---
tags:
- adapter-transformers
- adapterhub:sentiment/rotten_tomatoes
- roberta
datasets:
- rotten_tomatoes
---
# Adapter `chelouche9/my-awesome-adapter` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("chelouche9/my-awesome-adapter", source="hf", set_active=True)
```
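Once the adapter is active, a prediction could look like the sketch below (not part of the original card); the exact output type depends on the attached prediction head, so the `logits` attribute is an assumption based on the standard classification head.
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("A heartfelt and genuinely funny film.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter, loaded as shown above

# Assumes the classification head exposes logits over the sentiment classes.
print(outputs.logits.softmax(dim=-1))
```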
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Seokeon/V14_lora_none_dog2
|
Seokeon
| 2024-01-16T09:05:49Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T09:02:01Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_dog2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
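As a loading sketch (not part of the original card), the adapter can be attached to the base pipeline via diffusers' `load_lora_weights`, assuming a reasonably recent diffusers release and a CUDA device:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Seokeon/V14_lora_none_dog2")

# "sks dog" is the instance prompt these LoRA weights were trained on.
image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```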
|
tmnam20/mdeberta-v3-base-vtoc-10
|
tmnam20
| 2024-01-16T09:05:25Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T09:02:51Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vtoc-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- name: Accuracy
type: accuracy
value: 0.8088476242490442
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vtoc-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7381
- Accuracy: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7432 | 2.19 | 500 | 0.7743 | 0.7963 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Seokeon/V14_lora_none_cat
|
Seokeon
| 2024-01-16T09:01:43Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T08:57:56Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_cat
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
tmnam20/mdeberta-v3-base-vsmec-100
|
tmnam20
| 2024-01-16T09:00:17Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:57:53Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vsmec-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSMEC
type: tmnam20/VieGLUE
config: vsmec
split: validation
args: vsmec
metrics:
- name: Accuracy
type: accuracy
value: 0.5539358600583091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsmec-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSMEC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2296
- Accuracy: 0.5539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0733 | 2.87 | 500 | 1.2329 | 0.5510 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Yi-34Bx2-MoE-60B-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-16T08:59:57Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T08:44:22Z |
---
license: cc-by-nc-4.0
---
# Yi-based MoE 2x34B with Mixtral architecture
Highest-scoring model on the Open LLM Leaderboard (2024-01-11):
* [Average Score 76.72](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is an English & Chinese MoE model, slightly different from [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B), and also based on:
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
GPU code example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math

## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='auto', local_files_only=False, load_in_4bit=True
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
CPU example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math

## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
|
tmnam20/mdeberta-v3-base-vsmec-10
|
tmnam20
| 2024-01-16T08:57:53Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:55:22Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vsmec-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSMEC
type: tmnam20/VieGLUE
config: vsmec
split: validation
args: vsmec
metrics:
- name: Accuracy
type: accuracy
value: 0.5364431486880467
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsmec-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSMEC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3020
- Accuracy: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1704 | 2.87 | 500 | 1.3027 | 0.5335 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-vsmec-1
|
tmnam20
| 2024-01-16T08:55:22Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:52:46Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vsmec-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSMEC
type: tmnam20/VieGLUE
config: vsmec
split: validation
args: vsmec
metrics:
- name: Accuracy
type: accuracy
value: 0.5335276967930029
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vsmec-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSMEC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2431
- Accuracy: 0.5335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0725 | 2.87 | 500 | 1.2408 | 0.5408 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Seokeon/V14_lora_none_rc_car
|
Seokeon
| 2024-01-16T08:53:22Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-16T07:53:55Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Seokeon/V14_lora_none_rc_car
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
tmnam20/mdeberta-v3-base-vnrte-100
|
tmnam20
| 2024-01-16T08:44:53Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:41:59Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vnrte-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VNRTE
type: tmnam20/VieGLUE
config: vnrte
split: validation
args: vnrte
metrics:
- name: Accuracy
type: accuracy
value: 0.9987248963978324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vnrte-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VNRTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0063
- Accuracy: 0.9987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0031 | 1.28 | 500 | 0.0002 | 1.0 |
| 0.0002 | 2.55 | 1000 | 0.0011 | 0.9997 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
rccmsu/ruadapt_llama2_7b_v0.1
|
rccmsu
| 2024-01-16T08:42:30Z | 265 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ru",
"arxiv:2312.02598",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T10:42:14Z |
---
license: llama2
language:
- ru
metrics:
- accuracy
---
# ruadapt_llama2_7b_v0.1
This model is a fine-tuned (embeddings, LM head) version of TheBloke/Llama-2-7B-fp16 on a Russian dataset (33 GB).
It achieves the following results on the evaluation set:
- Loss: 2.7569
- Accuracy: 0.4617
Instruct version:
https://huggingface.co/rccmsu/ruadapt_saiga2_7b_v0.1
## Model description
Russian adaptation of LLaMa-2-7B by replacing the tokenizer.
Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation //arXiv preprint arXiv:2312.02598. – 2023.
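A minimal generation sketch (not part of the original card), using the standard transformers causal-LM API; the prompt and generation settings are illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "rccmsu/ruadapt_llama2_7b_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Москва - это"  # "Moscow is ..."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```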
## Intended uses & limitations
LLAMA 2 COMMUNITY LICENSE AGREEMENT
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- total_eval_batch_size: 96
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: linear
- num_epochs: 2.0
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
tmnam20/mdeberta-v3-base-vnrte-1
|
tmnam20
| 2024-01-16T08:40:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:38:09Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-vnrte-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VNRTE
type: tmnam20/VieGLUE
config: vnrte
split: validation
args: vnrte
metrics:
- name: Accuracy
type: accuracy
value: 0.999681224099458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-vnrte-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VNRTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0024
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0077 | 1.28 | 500 | 0.0007 | 0.9997 |
| 0.0043 | 2.55 | 1000 | 0.0025 | 0.9997 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES
|
notstoic
| 2024-01-16T08:39:32Z | 9 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T08:22:17Z |
---
base_model: []
tags:
- mergekit
- merge
---
# Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
An experimental merge.
Prompt format: ChatML or Mixtral-8x7B-Instruct-v0.1
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./models/Mixtral-8x7B-v0.1 as a base.
### Models Merged
The following models were included in the merge:
* [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
* [Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ./models/Mixtral-8x7B-Instruct-v0.1
parameters:
density: 0.5
weight: 1.0
- model: ./models/Nous-Hermes-2-Mixtral-8x7B-DPO
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: ./models/Mixtral-8x7B-v0.1
parameters:
#normalize: false
#int8_mask: true
dtype: bfloat16
```
|
Seokeon/full_pp_robot_toy
|
Seokeon
| 2024-01-16T08:38:54Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T07:59:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/full_pp_robot_toy
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks toy using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
wcyat/whisper-small-yue-hk-retrained
|
wcyat
| 2024-01-16T08:38:35Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:wcyat/whisper-small-yue-hk-retrained-1",
"base_model:finetune:wcyat/whisper-small-yue-hk-retrained-1",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-10T12:50:13Z |
---
base_model: wcyat/whisper-small-yue-hk-retrained-1
tags:
- generated_from_trainer
model-index:
- name: whisper-small-yue-hk-retrained-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-yue-hk-retrained-2
This model is a fine-tuned version of [wcyat/whisper-small-yue-hk-retrained-1](https://huggingface.co/wcyat/whisper-small-yue-hk-retrained-1) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2631
- eval_cer: 12.5099
- eval_runtime: 4014.1159
- eval_samples_per_second: 2.037
- eval_steps_per_second: 0.127
- epoch: 0.81
- step: 1000
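As a usage sketch (not part of the original card), the checkpoint can be run through the ASR pipeline; the repository id and the audio filename are assumptions.
```python
from transformers import pipeline

# Assumes the checkpoint is published under this repository id; the audio file is a placeholder.
asr = pipeline("automatic-speech-recognition", model="wcyat/whisper-small-yue-hk-retrained")
print(asr("sample_cantonese.wav")["text"])
```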
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/Kunoichi-DPO-v2-7B-GGUF
|
s3nh
| 2024-01-16T08:36:36Z | 37 | 8 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T08:07:12Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors and new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
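As an illustration (not part of the original card), a GGUF file from this repository could be loaded with `llama-cpp-python`; the filename below is a placeholder for whichever quantization you download.
```python
from llama_cpp import Llama

# Placeholder filename -- use one of the GGUF files actually shipped in this repository.
llm = Llama(model_path="kunoichi-dpo-v2-7b.Q4_K_M.gguf", n_ctx=2048)
out = llm("Explain quantization in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```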
### Inference
User: Tell me story about what is an quantization and what do we need to build.
I've been trying to learn about Quantization in deep learning, but I find the concepts very abstract and hard to grasp. Can you please tell me a simpler way to understand it? Also, could you give me an example of when and how we might need to use quantization while building a model?
Answer: Sure! Let's think about quantization as converting continuous data into discrete (or finite) values. In the context of deep learning, this usually refers to converting the floating-point weights in our neural network models into integers.
Imagine you have a drawing with millions of
# Original model card
|
Azam/corgy_dog_LoRA
|
Azam
| 2024-01-16T08:34:08Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-16T07:17:16Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - Azam/corgy_dog_LoRA
<Gallery />
## Model description
These are Azam/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Azam/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
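A minimal loading sketch (not part of the original card), assuming a CUDA device and a recent diffusers release:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Azam/corgy_dog_LoRA")

# "TOK dog" is the trigger phrase for these LoRA weights.
image = pipe("a photo of TOK dog playing in the snow").images[0]
image.save("tok_dog.png")
```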
|
tmnam20/mdeberta-v3-base-rte-100
|
tmnam20
| 2024-01-16T08:32:41Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:30:50Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-rte-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6931407942238267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-rte-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6043
- Accuracy: 0.6931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jvh/Mistral-asst_top1_2023-GEITje
|
jvh
| 2024-01-16T08:31:48Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1",
"base_model:merge:NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1",
"base_model:Rijgersberg/GEITje-7B-chat-v2",
"base_model:merge:Rijgersberg/GEITje-7B-chat-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-15T17:27:37Z |
---
base_model:
- Rijgersberg/GEITje-7B-chat-v2
- NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Rijgersberg/GEITje-7B-chat-v2](https://huggingface.co/Rijgersberg/GEITje-7B-chat-v2)
* [NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1](https://huggingface.co/NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Rijgersberg/GEITje-7B-chat-v2
layer_range: [0, 32]
- model: NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1
layer_range: [0, 32]
merge_method: slerp
base_model: Rijgersberg/GEITje-7B-chat-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
tmnam20/mdeberta-v3-base-rte-10
|
tmnam20
| 2024-01-16T08:30:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:28:57Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-rte-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6931407942238267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-rte-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5885
- Accuracy: 0.6931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-rte-1
|
tmnam20
| 2024-01-16T08:28:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:27:06Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-rte-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6353790613718412
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-rte-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6495
- Accuracy: 0.6354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-qqp-100
|
tmnam20
| 2024-01-16T08:27:06Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:25:04Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: mdeberta-v3-base-qqp-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QQP
type: tmnam20/VieGLUE
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8987880286915657
- name: F1
type: f1
value: 0.8654655444502892
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-qqp-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2790
- Accuracy: 0.8988
- F1: 0.8655
- Combined Score: 0.8821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3099 | 0.44 | 5000 | 0.2921 | 0.8751 | 0.8326 | 0.8539 |
| 0.269 | 0.88 | 10000 | 0.2732 | 0.8820 | 0.8378 | 0.8599 |
| 0.2421 | 1.32 | 15000 | 0.2795 | 0.8894 | 0.8520 | 0.8707 |
| 0.2198 | 1.76 | 20000 | 0.2674 | 0.8937 | 0.8566 | 0.8751 |
| 0.188 | 2.2 | 25000 | 0.2778 | 0.8964 | 0.8602 | 0.8783 |
| 0.1916 | 2.64 | 30000 | 0.2861 | 0.8977 | 0.8636 | 0.8807 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Yi-34Bx2-MoE-60B-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-16T08:22:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T08:10:22Z |
---
license: cc-by-nc-4.0
---
# Yi-based MoE 2x34B with Mixtral architecture
Highest-scoring model on the Open LLM Leaderboard (2024-01-11)
* [Average Score 76.72](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is an English & Chinese MoE model, slightly different from [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B), and also based on
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
GPU code example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU code example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
tmnam20/mdeberta-v3-base-qnli-100
|
tmnam20
| 2024-01-16T08:21:23Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:19:38Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-qnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QNLI
type: tmnam20/VieGLUE
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8974922203917262
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-qnli-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2906
- Accuracy: 0.8975
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3773 | 0.15 | 500 | 0.3870 | 0.8431 |
| 0.3547 | 0.31 | 1000 | 0.3175 | 0.8658 |
| 0.3385 | 0.46 | 1500 | 0.2986 | 0.8739 |
| 0.342 | 0.61 | 2000 | 0.2787 | 0.8845 |
| 0.3003 | 0.76 | 2500 | 0.3075 | 0.8726 |
| 0.3298 | 0.92 | 3000 | 0.2781 | 0.8807 |
| 0.2475 | 1.07 | 3500 | 0.2695 | 0.8942 |
| 0.2441 | 1.22 | 4000 | 0.2615 | 0.8940 |
| 0.249 | 1.37 | 4500 | 0.2548 | 0.8958 |
| 0.2261 | 1.53 | 5000 | 0.2588 | 0.8946 |
| 0.2348 | 1.68 | 5500 | 0.2587 | 0.8982 |
| 0.2626 | 1.83 | 6000 | 0.2581 | 0.8982 |
| 0.2463 | 1.99 | 6500 | 0.2520 | 0.8964 |
| 0.1768 | 2.14 | 7000 | 0.2795 | 0.8951 |
| 0.1768 | 2.29 | 7500 | 0.3069 | 0.8942 |
| 0.1752 | 2.44 | 8000 | 0.2783 | 0.8971 |
| 0.1687 | 2.6 | 8500 | 0.2900 | 0.8995 |
| 0.163 | 2.75 | 9000 | 0.2828 | 0.8969 |
| 0.1547 | 2.9 | 9500 | 0.2873 | 0.8980 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
greymatter-2024/tinyllama2_finetuned_chatbot_hey
|
greymatter-2024
| 2024-01-16T08:21:17Z | 15 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-01-16T05:53:18Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tinyllama2_finetuned_chatbot_hey
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama2_finetuned_chatbot_hey
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tmnam20/mdeberta-v3-base-qnli-10
|
tmnam20
| 2024-01-16T08:19:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:17:51Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-qnli-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QNLI
type: tmnam20/VieGLUE
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8984074684239429
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-qnli-10
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2859
- Accuracy: 0.8984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3968 | 0.15 | 500 | 0.3264 | 0.8623 |
| 0.3826 | 0.31 | 1000 | 0.2996 | 0.8774 |
| 0.3478 | 0.46 | 1500 | 0.2894 | 0.8845 |
| 0.2959 | 0.61 | 2000 | 0.2745 | 0.8883 |
| 0.3228 | 0.76 | 2500 | 0.2640 | 0.8905 |
| 0.2899 | 0.92 | 3000 | 0.2723 | 0.8925 |
| 0.2269 | 1.07 | 3500 | 0.2850 | 0.8935 |
| 0.2614 | 1.22 | 4000 | 0.2607 | 0.8984 |
| 0.2508 | 1.37 | 4500 | 0.2831 | 0.8878 |
| 0.2563 | 1.53 | 5000 | 0.2556 | 0.8960 |
| 0.2485 | 1.68 | 5500 | 0.2618 | 0.9019 |
| 0.2373 | 1.83 | 6000 | 0.2600 | 0.8953 |
| 0.2361 | 1.99 | 6500 | 0.2545 | 0.9023 |
| 0.162 | 2.14 | 7000 | 0.3093 | 0.8997 |
| 0.2115 | 2.29 | 7500 | 0.2685 | 0.9010 |
| 0.176 | 2.44 | 8000 | 0.2966 | 0.8982 |
| 0.2047 | 2.6 | 8500 | 0.2767 | 0.8982 |
| 0.1831 | 2.75 | 9000 | 0.2918 | 0.8968 |
| 0.1818 | 2.9 | 9500 | 0.2818 | 0.8979 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
NBA55/llama2-qlora-finetunined-4-bit-prev-and-4.14k-learning-rate-3e4
|
NBA55
| 2024-01-16T08:11:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-16T08:11:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
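As an illustration, the flags above map onto a `transformers` `BitsAndBytesConfig` roughly like the sketch below (sketch only, not taken from the actual training code):
```python
import torch
from transformers import BitsAndBytesConfig
# Mirrors the bitsandbytes quantization flags listed in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```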
### Framework versions
- PEFT 0.4.0
|
tmnam20/mdeberta-v3-base-mnli-100
|
tmnam20
| 2024-01-16T08:10:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T08:08:10Z |
---
language:
- en
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: mdeberta-v3-base-mnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MNLI
type: tmnam20/VieGLUE
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8412327095199349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-mnli-100
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4764
- Accuracy: 0.8412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5194 | 0.41 | 5000 | 0.4901 | 0.8127 |
| 0.4861 | 0.81 | 10000 | 0.4713 | 0.8114 |
| 0.3993 | 1.22 | 15000 | 0.4508 | 0.8285 |
| 0.3867 | 1.63 | 20000 | 0.4546 | 0.8302 |
| 0.3496 | 2.04 | 25000 | 0.4765 | 0.8295 |
| 0.3376 | 2.44 | 30000 | 0.4828 | 0.8315 |
| 0.3104 | 2.85 | 35000 | 0.4852 | 0.8314 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
alibidaran/llama-2-7b-instruction_code
|
alibidaran
| 2024-01-16T08:07:39Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"dataset:Nan-Do/instructional_code-search-net-python",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-01T11:12:24Z |
---
license: apache-2.0
datasets:
- Nan-Do/instructional_code-search-net-python
language:
- en
tags:
- code
---
|
jinymusim/poet-validators
|
jinymusim
| 2024-01-16T08:04:33Z | 0 | 0 | null |
[
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-01-14T11:05:24Z |
---
license: cc-by-sa-4.0
---
# Model Generation testing and validating
Validator models with scripts for strophe generation and validation
## Usage
Use the included script ```simple_generation_player.py``` to start generating.
|
Seokeon/full_pp_rc_car
|
Seokeon
| 2024-01-16T07:58:44Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T06:50:50Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks toy
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/full_pp_rc_car
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of sks toy" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was enabled: False.
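For illustration, a minimal 🧨 diffusers inference sketch using the instance prompt above (the dtype, step count and guidance scale are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "Seokeon/full_pp_rc_car", torch_dtype=torch.float16
).to("cuda")
# "sks" is the rare-token identifier used in the instance prompt during training.
image = pipe("a photo of sks toy", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_toy.png")
```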
|
fuyu-quant/ibl-regression-ver2-all
|
fuyu-quant
| 2024-01-16T07:54:40Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-16T07:54:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
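As an illustration (not from this card), the adapter could be attached to its base model loaded with this quantization config; `BASE_MODEL_ID` is a placeholder because the card does not name the base model, and `AutoModelForCausalLM` is an assumption about the task head:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
BASE_MODEL_ID = "<base-model-id>"  # placeholder: the card does not state the base model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "fuyu-quant/ibl-regression-ver2-all")
```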
### Framework versions
- PEFT 0.4.0
|
567-labs/bge-base-en-v1.5-ft-quora-0.1
|
567-labs
| 2024-01-16T07:50:31Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-16T07:50:13Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 885 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
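For reference, a training call matching the parameters above would presumably look like the sketch below; the base model checkpoint (inferred from the repo name) and the toy training example are assumptions, not taken from this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # assumed base model, inferred from the repo name
# Toy duplicate-question pair (label 1 = duplicate), just to make the sketch self-contained.
train_examples = [InputExample(texts=["How do I learn Python?", "How can I learn Python?"], label=1)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.OnlineContrastiveLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```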
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
LoneStriker/Yi-34Bx2-MoE-60B-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-16T07:46:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T07:36:53Z |
---
license: cc-by-nc-4.0
---
# Yi-based MoE 2x34B with Mixtral architecture
Highest-scoring model on the Open LLM Leaderboard (2024-01-11)
* [Average Score 76.72](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
This is an English & Chinese MoE model, slightly different from [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B), and also based on
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
GPU code example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU code example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Yi-34Bx2-MoE-60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
radames/sd-21-DPO-LoRA
|
radames
| 2024-01-16T07:44:11Z | 144 | 6 |
diffusers
|
[
"diffusers",
"text-to-image",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"region:us"
] |
text-to-image
| 2024-01-07T20:04:09Z |
---
library_name: diffusers
pipeline_tag: text-to-image
inference: true
base_model: stabilityai/stable-diffusion-2-1
---
# DPO LoRA Stable Diffusion v2-1
Model trained with the LoRA implementation of Diffusion DPO. Read more [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/diffusion_dpo).
Base Model: https://huggingface.co/stabilityai/stable-diffusion-2-1
## Running with [🧨 diffusers library](https://github.com/huggingface/diffusers)
```python
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
import torch
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/sd-turbo", # SD Turbo is a destilled version of Stable Diffusion 2.1
# "stabilityai/stable-diffusion-2-1", # for the original stable diffusion 2.1 model
torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")
pipe.load_lora_weights("radames/sd-21-DPO-LoRA", adapter_name="dpo-lora-sd21")
pipe.set_adapters(["dpo-lora-sd21"], adapter_weights=[1.0]) # you can play with adapter_weights to increase the effect of the LoRA model
seed = 123123
prompt = "portrait headshot professional of elon musk"
negative_prompt = "3d render, cartoon, drawing, art, low light"
generator = torch.Generator().manual_seed(seed)
images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=512,
height=512,
num_inference_steps=2,
generator=generator,
guidance_scale=1.0,
num_images_per_prompt=4
).images
make_image_grid(images, 1, 4)
```
## Guidance Scale vs LoRA weights

## Examples
Left: without DPO; right: with DPO LoRA.
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/R8E0hRpWIE6OhhtvgJeEU.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/Eg4LbyxCfhmsk2INzqODw.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/GD7KumSCNweBWMJ1TArI-.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/SO7QoA9lZJY9hI0U4fBLy.png style="max-width: 60rem;">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6064e095abd8d3692e3e2ed6/ZWbQwIQ5OklEgF9RW581R.png style="max-width: 60rem;">
|
damojay/taml
|
damojay
| 2024-01-16T07:38:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-16T07:37:42Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
youdiniplays/ceb-tl-model
|
youdiniplays
| 2024-01-16T07:27:27Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-15T10:31:35Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: ceb-tl-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ceb-tl-model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6649
- Bleu: 3.6178
- Gen Len: 18.154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.0551 | 1.0 | 6516 | 0.9019 | 2.8382 | 18.183 |
| 0.879 | 2.0 | 13032 | 0.7772 | 3.1412 | 18.182 |
| 0.7844 | 3.0 | 19548 | 0.7146 | 3.4508 | 18.18 |
| 0.728 | 4.0 | 26064 | 0.6773 | 3.5651 | 18.17 |
| 0.6838 | 5.0 | 32580 | 0.6649 | 3.6178 | 18.154 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
FelixChao/NinjaDolphin-7B
|
FelixChao
| 2024-01-16T07:25:18Z | 1,375 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"beowolx/CodeNinja-1.0-OpenChat-7B",
"beowolx/MistralHermes-CodePro-7B-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T14:14:49Z |
---
license: apache-2.0
tags:
- merge
- beowolx/CodeNinja-1.0-OpenChat-7B
- beowolx/MistralHermes-CodePro-7B-v1
model-index:
- name: NinjaDolphin-7B
results:
- task:
type: text-generation # Required. Example: automatic-speech-recognition
dataset:
type: openai_humaneval # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: HumanEval # Required. A pretty name for the dataset. Example: Common Voice (French)
metrics:
- type: pass@1 # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 52.4390243902439 # Required. Example: 20.90
name: pass@1 # Optional. Example: Test WER
verified: false
---
# NinjaDolphin-7B
NinjaDolphin-7B is a merge of the following models:
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
It improves on the coding ability of [FelixChao/WizardDolphin-7B](https://huggingface.co/FelixChao/WizardDolphin-7B).
## HumanEval (uninstructed, no post-processing)
| Metric | Value |
| --- | --- |
| humaneval-python |52.4390243902439|

## 🧩 Configuration
```yaml
models:
- model: FelixChao/WizardDolphin-7B
- model: beowolx/CodeNinja-1.0-OpenChat-7B
parameters:
density: 0.53
weight: 0.3
- model: beowolx/MistralHermes-CodePro-7B-v1
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: FelixChao/WizardDolphin-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "FelixChao/NinjaDolphin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__NinjaDolphin-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.74|
|AI2 Reasoning Challenge (25-Shot)|65.61|
|HellaSwag (10-Shot) |85.35|
|MMLU (5-Shot) |64.43|
|TruthfulQA (0-shot) |54.94|
|Winogrande (5-shot) |80.27|
|GSM8k (5-shot) |67.85|
|
FelixChao/WizardDolphin-7B
|
FelixChao
| 2024-01-16T07:24:39Z | 1,376 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"WizardLM/WizardMath-7B-V1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T09:36:34Z |
---
license: apache-2.0
tags:
- merge
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- WizardLM/WizardMath-7B-V1.1
---
# WizardDolphin-7B
WizardDolphin-7B is a merge of the following models:
* [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
parameters:
density: 0.5
weight: 0.5
- model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "FelixChao/WizardDolphin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Sharathhebbar24/llama_chat_small_7b
|
Sharathhebbar24
| 2024-01-16T07:20:58Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-16T07:17:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tmnam20/bert-base-multilingual-cased-vsmec-100
|
tmnam20
| 2024-01-16T07:19:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:18:38Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vsmec-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSMEC
type: tmnam20/VieGLUE
config: vsmec
split: validation
args: vsmec
metrics:
- name: Accuracy
type: accuracy
value: 0.5364431486880467
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vsmec-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VSMEC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3263
- Accuracy: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0403 | 2.87 | 500 | 1.3329 | 0.5335 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Wembo/a2c-PandaReachDense-v3
|
Wembo
| 2024-01-16T07:19:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-16T07:15:02Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Download the checkpoint from the Hub; adjust the filename if the repository stores it differently.
checkpoint = load_from_hub(repo_id="Wembo/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
tmnam20/bert-base-multilingual-cased-qqp-1
|
tmnam20
| 2024-01-16T07:17:25Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:16:17Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-cased-qqp-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QQP
type: tmnam20/VieGLUE
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8912441256492704
- name: F1
type: f1
value: 0.8515680383485805
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-qqp-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2978
- Accuracy: 0.8912
- F1: 0.8516
- Combined Score: 0.8714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3241 | 0.44 | 5000 | 0.3155 | 0.8585 | 0.8090 | 0.8337 |
| 0.3239 | 0.88 | 10000 | 0.2986 | 0.8655 | 0.8091 | 0.8373 |
| 0.2479 | 1.32 | 15000 | 0.2984 | 0.8762 | 0.8301 | 0.8532 |
| 0.2461 | 1.76 | 20000 | 0.2838 | 0.8818 | 0.8387 | 0.8603 |
| 0.1919 | 2.2 | 25000 | 0.2947 | 0.8887 | 0.8491 | 0.8689 |
| 0.1965 | 2.64 | 30000 | 0.2967 | 0.8896 | 0.8489 | 0.8692 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
KnutJaegersberg/Walter-Phi
|
KnutJaegersberg
| 2024-01-16T07:16:59Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"dataset:KnutJaegersberg/Auton",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T06:02:55Z |
---
license: mit
datasets:
- KnutJaegersberg/Auton
---

Walter is an unaligned, free-thinking AI assistant that has been given time to think about things.
It is trained on instruction datasets with open-source licenses.
It covers a lot of tasks; about two thirds of the samples come from large datasets like FLAN, with the rest from other datasets.
It knows a few tricks, as shown by the prompting examples below:
General Prompt Examples:
```
### Instruction:
Please answer the following question: Process: - The bat sends out sound waves from their mouth or nose - The sound waves hit an object - Echoes are produced - The echo returns to the bat's ears - The bat hears the echo - The bat can figure out where the object is located. suppose No bats live here happens, how will it affect STRONGER echolocation. Which of the following is the supposed perturbation? - directly impacting a step of the process - indirectly impacting a step of the process - not impacting any step of the process Answer:
### Response:
indirectly impacting a step of the process
```
```
### Instruction:
You will be given a text in Russian language which contain different emotion labels from the list - ['joy', ' sadness', 'surprise', 'fear', 'anger']. You need to output the incorrect emotion label, which is irrelevant to the input text. Your answer (i) should contain only one emotion label (ii) should be unambiguous.
### Input:
То чувство, когда идешь домой после 3-х часовой тренировки и в подъезде пахнет запеченной курицей - зачем вы меня дразните :(
### Response:
surprise
```
CoT Prompt Example:
```
### Instruction:
In this task, you are given a sentence with a missing word that can be an object, a person, and/or an action. Fill in the blank with a plausible word. Although each sentence has many correct answers, you only have to write one answer. PersonX puts ___ on PersonY's car
### Reasoning:
Since PersonX is putting water on PersonY's car, it could be assumed that the car is on fire.
### Response:
water
```
Summarization Example:
```
### Instruction:
Below is a text. Write a comprehensive, concise and coherent summary.
### Input:
Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly. ==Overview== Embodied bilingual language refers to the role ......
### Response:
Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2) as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which only refer to native language thinking and speaking. An example of embodied bilingual language would be situation in which a L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly.
```
Essay Example:
```
### Instruction:
Write the original essay for the following summary: A sleeveless dress with a round neck is a great day dress and can be worn for both formal and casual occasions. It has a sweetheart neckline and is made from 100% silk. The dress has a chiffon overlay that covers up the miniskirt and still adds to the class of the dress.
### Response:
Sleeveless Dress with a Round Neck Essay Casual wear, day dress, more formal dress The garment is a sleeveless dress with a round neck and has a sweetheart neckline. It has sheer detail on the neckline and on the back. These aspects of design make it suitable to be worn for day and evening occasions. Its great detail and chicness make it suitable for more formal events, ....
```
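For illustration, a minimal generation sketch using the `### Instruction:` / `### Response:` format above (the dtype and device placement are assumptions; `trust_remote_code=True` is used because the repository is tagged with custom code):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "KnutJaegersberg/Walter-Phi"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True)
# Build the prompt in the instruction format shown in the examples above.
prompt = "### Instruction:\nName three ways bats use echolocation.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```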
|
tmnam20/bert-base-multilingual-cased-wnli-10
|
tmnam20
| 2024-01-16T07:16:16Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:15:04Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-wnli-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/WNLI
type: tmnam20/VieGLUE
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-wnli-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
brucethemoose/Capybara-Fixed-Temp
|
brucethemoose
| 2024-01-16T07:15:10Z | 8 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"sft",
"Yi-34B-200K",
"eng",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T06:19:35Z |
---
language:
- eng
tags:
- sft
- Yi-34B-200K
license:
- mit
datasets:
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
---
## **Nous-Capybara-34B V1.9**
**This is trained on the Yi-34B model with 200K context length, for 3 epochs on the Capybara dataset!**
**First 34B Nous model and first 200K context length Nous model!**
The Capybara series is the first Nous collection of models made by fine-tuning mostly on data created by Nous in-house.
We leverage our novel data synthesis technique called Amplify-Instruct (paper coming soon). The seed distribution and synthesis method combine top-performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct (WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others into one lean, holistically formed methodology for the dataset and model. The seed instructions used to start the synthesized conversations are largely based on highly regarded datasets like Airoboros, Know_Logic, EverythingLM and GPTeacher, together with entirely new seed instructions derived from posts on the website LessWrong, and are supplemented with certain in-house multi-turn datasets like Dove (a successor to Puffin).
While the model performs well in its current state, the dataset used for fine-tuning is contained within 20K training examples, roughly 10 times smaller than that of many similarly performing current models. This is significant for the scaling implications of our next generation of models once we scale our novel synthesis methods to significantly more examples.
## Process of creation and special thank yous!
This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D.(LDJ) (Paper coming soon), as well as significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations.
Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project.
## Thank you to those of you that have indirectly contributed!
While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds to generate the multi-turn data as part of the Amplify-Instruct synthesis.
The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project.
Datasets in Blue are in-house curations that previously existed prior to Capybara.

## Prompt Format
The recommended prompt format is:
Prefix: ``USER:``
Suffix: ``ASSISTANT:``
Stop token: ``</s>``
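For illustration, a minimal generation sketch applying this format with 🤗 transformers (the dtype, device placement and example prompt are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "brucethemoose/Capybara-Fixed-Temp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
# USER:/ASSISTANT: template as described above; generation stops at the </s> EOS token.
prompt = "USER: Summarize the Capybara dataset in one sentence. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```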
## Multi-Modality!
- We currently have a Multi-modal model based on Capybara V1.9!
https://huggingface.co/NousResearch/Obsidian-3B-V0.5
It is currently only available as a 3B-sized model, but larger versions are coming!
## Notable Features:
- Uses Yi-34B model as the base which is trained for 200K context length!
- Over 60% of the dataset is composed of multi-turn conversations. (Most models are still only trained on single-turn conversations, with no back-and-forth!)
- Over 1,000 tokens average per conversation example! (Most models are trained on conversation data that is less than 300 tokens per example.)
- Able to effectively do complex summaries of advanced topics and studies. (trained on hundreds of advanced difficult summary tasks developed in-house)
- Ability to recall information up to late 2022 without internet access.
- Includes a portion of conversational data synthesized from less wrong posts, discussing very in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts.
## Example Outputs from Capybara V1.9 7B version! (examples from 34B coming soon):



## Benchmarks! (Coming soon!)
## Future model sizes
Capybara V1.9 currently comes in 3B, 7B and 34B sizes, and we plan to eventually release 13B and 70B versions, as well as a potential 1B version based on phi-1.5 or TinyLlama.
## How you can help!
In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically or verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise, please contact LDJ on Discord!
## Dataset contamination.
We have checked the Capybara dataset for contamination against several of the most popular benchmarks and can confirm that no contamination was found.
We leveraged MinHash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in those benchmarks. We found no exact matches, nor any matches down to the 97% similarity level. A sketch of this kind of check is shown after the list below.
The following are benchmarks we checked for contamination against our dataset:
- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All
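For readers who want to reproduce this style of check, the sketch below shows the general idea using the `datasketch` library; the word-level shingling and the thresholding are illustrative assumptions, not the exact pipeline used for Capybara.

```python
from datasketch import MinHash  # pip install datasketch

def minhash_signature(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from lowercase word tokens."""
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

def looks_contaminated(train_example: str, benchmark_item: str, threshold: float = 0.97) -> bool:
    """Flag pairs whose estimated Jaccard similarity meets the threshold."""
    sim = minhash_signature(train_example).jaccard(minhash_signature(benchmark_item))
    return sim >= threshold

# Toy illustration: compare one training example against one benchmark question.
print(looks_contaminated(
    "What is the capital of France?",
    "Name the capital city of France.",
))
```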
|
nullne/ppo-Huggy
|
nullne
| 2024-01-16T07:12:48Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-16T07:12:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nullne/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tmnam20/bert-base-multilingual-cased-mnli-10
|
tmnam20
| 2024-01-16T07:09:03Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:07:54Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-mnli-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MNLI
type: tmnam20/VieGLUE
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.7999389747762409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-mnli-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5432
- Accuracy: 0.7999
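As a quick sanity-check sketch, the checkpoint can be loaded like any `transformers` sequence-classification model; the premise/hypothesis pair below is a made-up example, and the label names depend on the exported config rather than anything documented in this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tmnam20/bert-base-multilingual-cased-mnli-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names come from the exported config
```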
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6369 | 0.41 | 5000 | 0.6399 | 0.7401 |
| 0.5945 | 0.81 | 10000 | 0.5746 | 0.7680 |
| 0.4847 | 1.22 | 15000 | 0.5817 | 0.7773 |
| 0.5109 | 1.63 | 20000 | 0.5680 | 0.7790 |
| 0.3754 | 2.04 | 25000 | 0.5796 | 0.7890 |
| 0.3989 | 2.44 | 30000 | 0.5581 | 0.7892 |
| 0.4013 | 2.85 | 35000 | 0.5501 | 0.7955 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-qnli-1
|
tmnam20
| 2024-01-16T07:05:30Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:04:21Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-qnli-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QNLI
type: tmnam20/VieGLUE
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.885227896760022
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-qnli-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3278
- Accuracy: 0.8852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3938 | 0.15 | 500 | 0.3494 | 0.8495 |
| 0.3712 | 0.31 | 1000 | 0.3266 | 0.8570 |
| 0.3837 | 0.46 | 1500 | 0.3174 | 0.8655 |
| 0.3466 | 0.61 | 2000 | 0.2957 | 0.8785 |
| 0.3084 | 0.76 | 2500 | 0.3093 | 0.8715 |
| 0.322 | 0.92 | 3000 | 0.2950 | 0.8731 |
| 0.273 | 1.07 | 3500 | 0.2872 | 0.8834 |
| 0.2628 | 1.22 | 4000 | 0.3110 | 0.8794 |
| 0.2732 | 1.37 | 4500 | 0.2910 | 0.8797 |
| 0.2592 | 1.53 | 5000 | 0.2855 | 0.8849 |
| 0.241 | 1.68 | 5500 | 0.2974 | 0.8861 |
| 0.2256 | 1.83 | 6000 | 0.2914 | 0.8850 |
| 0.2402 | 1.99 | 6500 | 0.2759 | 0.8883 |
| 0.1958 | 2.14 | 7000 | 0.3080 | 0.8880 |
| 0.1684 | 2.29 | 7500 | 0.3190 | 0.8847 |
| 0.1472 | 2.44 | 8000 | 0.3305 | 0.8871 |
| 0.1601 | 2.6 | 8500 | 0.3298 | 0.8836 |
| 0.1857 | 2.75 | 9000 | 0.3274 | 0.8847 |
| 0.1667 | 2.9 | 9500 | 0.3256 | 0.8841 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-qqp-100
|
tmnam20
| 2024-01-16T07:04:20Z | 96 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:03:08Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-cased-qqp-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/QQP
type: tmnam20/VieGLUE
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8905515706158793
- name: F1
type: f1
value: 0.8513354611120443
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-qqp-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2983
- Accuracy: 0.8906
- F1: 0.8513
- Combined Score: 0.8709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3417 | 0.44 | 5000 | 0.3198 | 0.8578 | 0.8057 | 0.8317 |
| 0.2998 | 0.88 | 10000 | 0.2908 | 0.8724 | 0.8252 | 0.8488 |
| 0.2629 | 1.32 | 15000 | 0.2970 | 0.8763 | 0.8300 | 0.8532 |
| 0.2269 | 1.76 | 20000 | 0.2874 | 0.8845 | 0.8405 | 0.8625 |
| 0.1933 | 2.2 | 25000 | 0.2962 | 0.8867 | 0.8470 | 0.8669 |
| 0.1752 | 2.64 | 30000 | 0.3174 | 0.8895 | 0.8497 | 0.8696 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Felix92/doctr-dummy-tf-textnet-tiny
|
Felix92
| 2024-01-16T07:02:28Z | 2 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-01-16T07:02:23Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
tmnam20/bert-base-multilingual-cased-vtoc-1
|
tmnam20
| 2024-01-16T07:01:59Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T07:00:47Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vtoc-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- name: Accuracy
type: accuracy
value: 0.8083014746040416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vtoc-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6734
- Accuracy: 0.8083
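A minimal inference sketch (the Vietnamese input sentence is a made-up example, and the label names depend on the exported config rather than anything documented in this card):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tmnam20/bert-base-multilingual-cased-vtoc-1",
)
# Hypothetical input: "The national football team won the final match."
print(classifier("Đội tuyển bóng đá quốc gia giành chiến thắng trong trận chung kết."))
```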
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4828 | 2.19 | 500 | 0.7023 | 0.8012 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
thanhnew2001/starcoder-7b-taipy25
|
thanhnew2001
| 2024-01-16T06:59:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T04:53:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
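In the absence of author-provided instructions, a generic loading sketch for a `gpt_bigcode` text-generation checkpoint might look like the following; the prompt and generation settings are assumptions, not documented usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thanhnew2001/starcoder-7b-taipy25"  # repo name; intended usage unconfirmed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "# Write a Taipy page that displays a slider\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```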
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gopal2002/setfit_zeon
|
Gopal2002
| 2024-01-16T06:58:07Z | 48 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] |
text-classification
| 2024-01-02T11:35:28Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: <s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice>
1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA
BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice>
12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice>
12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice>
12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price>
4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt>
1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price>
3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price>
3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>
- text: <s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LIIMITED</s_nm><s_discountprice>
-*OQU<sep/><s_nm> PYCHE DESIGNCE PURCHASE ORDER</s_nm><sep/><s_nm> WHOCO SUSHINGGA
CHOCO SUSHINGGA CHOCO SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG
SUSHINGGANG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANGHONG
SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG
SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONGHONG
POWER</s_nm><s_price> SUSHINGGANGHONGHONGHONG POWER</s_nm><s_price> SUSHINGGANGHONGHONG
POWER</s_nm><s_price> SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGA SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA
- text: <s_cord-v2><s_menu><s_nm> TAX INVOLICE</s_nm><s_unitprice> 2310</s_unitprice><s_cnt>
2</s_cnt><s_price> A</s_price><sep/><s_nm> BLOOM Combustion India Putu</s_nm><s_unitprice>
150,000</s_unitprice><s_cnt> 2</s_cnt><s_discountprice> 1,040<sep/><s_nm> A.C.B.C.B.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C
- text: <s_cord-v2><s_menu><s_nm> HINA DLCO INDUSTRIES LIMITED</s_nm><s_price> SUSHIZE</s_price><sep/><s_nm>
PONE CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO
CHOCOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
- text: '<s_cord-v2><s_menu><s_nm> HNDALCO INDUSTRIES LID. HIRAKUND POWER</s_nm><s_num>
ASH WITCH BRIOGE</s_nm><s_num> HPOM: 01-Hou DATE: 0001-social<sep/><s_nm> SAH</s_nm><s_num>
DAGE NUMBER : 1</s_etc><sep/><s_nm> SINO TAKING ODAYS OATE INTINE TAKE CROSS Wc
OLOAD SLOOPPERATOR</s_nm><s_num> JGGC</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JERCEA</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER<s_num>
JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num>
JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_num><s_price> 0.00</s_price><sep/><s_nm> ORANGA</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_total>'
pipeline_tag: text-classification
inference: true
base_model: BAAI/bge-small-en-v1.5
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 2 | <ul><li>'<s_cord-v2><s_menu><s_nm> M/S. JOSEPH SUNA</s_nm><s_num> DATABI<sep/><s_nm> Tankipaa.MIRAKU SAMBALPUR.768Q16</s_nm><s_num> DMB Nb N.9861345883<sep/><s_nm> Deals Which :Altypes of ChickenGNALOR REGUMING)</s_nm><s_num> DISALI<sep/><s_nm> WINNALIZED</s_nm><s_num> CHOCO SUSPECIALIZE</s_nm><s_num> TWICENCHE<sep/><s_nm> SHRANGKANG POWER</s_nm><s_num> LATHOCO TWICENKO:</s_nm><s_num> JERYUNG CHOCO TWICENKO:</s_nm><s_num> JERYUNG HZYGANGKAN<sep/><s_nm> DIFF-SAWALAPUKU SAMBALPUR.76801GHOLIZEG DATE</s_nm><s_num> DATE</s_nm><s_num> DATE:</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> PAN No.:</s_nm><s_num> PPODATE</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> DATE OPSE<sep/><s_nm> HANDUPPOWER</s_nm><s_num> 30.12221</s_num><s_price> 1,945.00</s_price><sep/><s_nm> SUSPENGGANGURG.GUSTAGUR GUSTAGANGKANGURGUSTAGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTG'</li><li>'<s_cord-v2><s_menu><s_nm> GST INVOLICE</s_nm><s_price> ORIGINAL FOR KEGINGLI</s_nm><s_price> WOUCE BREGRAMING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHI'</li><li>'<s_cord-v2><s_menu><s_nm> TAX INVOICE</s_nm><s_price> ORIGINAL FOR AQUALIZE</s_nm><s_price> SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO 
SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO '</li></ul> |
| 1 | <ul><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIRAKUD POWER</s_nm><sep/><s_nm> ASH WPTCH BRIOGE</s_nm><s_unitprice> TIMOL CATE BRIOUS DATE</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MSCED</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MICHI CHOCO KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KE'</li><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIR A KD POWER</s_nm></s_sub><sep/><s_nm> ASH WEICH BRIOGE</s_nm><s_unitprice> 16.36.36m2</s_unitprice><s_cnt> AGE IMPL CAST SUSIC :RING LETS SUSIC SUSIC SUSIC SUSIC SUSIC SUSIC SUSCCE</s_nm></s_sub><sep/><s_nm> MSCHO</s_nm><s_unitprice> 13.45</s_unitprice><s_cnt> 1.36.36</s_cnt><s_price> 6.36</s_price><sep/><s_nm> SUSPICY TEMPLE</s_nm><s_unitprice> 14.50.13.502</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREAT TRIPSE TO WBLE</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 
0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 5.940</s_price><sep/><s_nm> 0.00</s_price><sep/><s_nm> BRETYPETROPICPICPICPICYE</s_nm><s_unitprice> 13.50</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATPICYEPIC ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.50</s_cnt><s_price> 5.940.00</s_price></s_menu><s_sub_total><s_subtotal_price> 0.00</s_subtotal_price><s_tax_price> 13.50</s_tax_price></s_sub_total><s_total><s_total_price> 31.00</s_cnt><s_price> BK.00</s_total_price></s_total>'</li><li>'<s_cord-v2><s_menu><s_nm> ORI ZHDLE TOMI O JAPAN SUSHIKA JERYA CHARGE</s_nm><s_unitprice> @SAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKAStakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakatta
kattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakat'</li></ul> |
| 0 | <ul><li>'<s_cord-v2><s_menu><s_nm> HANDALCO 이미지ES LIMITED</s_nm><s_price> SUNDAYGHOCO SUSHIZEH CINCEHANGKAGHOCO SUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKANG PURCHASE ORDER</s_nm><sep/><s_nm> WANTE CHOCO CAKE CONSULATANCE PYI LOTHO NUMPIC UPICK CHOCO CHOCO CHOCOCO SUSHIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHER</s_nm><s_discountprice>Nt.Minitie HGHOCEHINE</s_nm><s_discountprice>N.Minitie HGUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N.Minitie HUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N</s_discountprice><s_price> 436.0</s_price><sep/><s_nm> OxMini WHEN HUMAGHUNG</s_nm><s_discountprice> SUSHIZEHITEGHOUSHILIZEHENCE COTTING THOGEHGHOCO SUSHIZEHITEGHTGHOLIZEHGHOLIZEHGHOLIZEHGHOLIZEHGPICYGLIZEHGHTG SOUTING SUSHIZEHITEGHTGHOLIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEH'</li><li>'<s_cord-v2><s_menu><s_nm> WINGllaco Industries Limited</s_nm><s_unitprice> LIKING PICCE CHOCOLOGY VICE</s_nm><s_unitprice> LIKING SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> HINDALCO INDUSTRIES LIMITED</s_nm><s_price> GSTING&NAACHI201</s_price><sep/><s_nm> WBABUPOWER HEROGUSTAMPURGANGKANCE 
CHOCOLOGALINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Gopal2002/setfit_zeon")
# Run inference
preds = model("<s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice> 1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice> 12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice> 12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> 4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price> 3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price> 3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 5 | 107.8041 | 763 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 47 |
| 1 | 51 |
| 2 | 50 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
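The hyperparameters above map directly onto SetFit's `TrainingArguments`. Below is a minimal, hedged reproduction sketch; the toy dataset and the base sentence-transformer are assumptions, since the underlying embedding model is not restated in this card.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy dataset; replace with the real labelled document texts (labels 0/1/2).
train_ds = Dataset.from_dict({
    "text": ["invoice text ...", "purchase order text ...", "challan text ..."],
    "label": [0, 1, 2],
})

# Assumed base sentence-transformer; substitute the one this checkpoint was actually trained from.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(32, 32),
    num_epochs=(2, 2),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
    end_to_end=False,
    use_amp=False,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```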
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0022 | 1 | 0.3004 | - |
| 0.1094 | 50 | 0.2457 | - |
| 0.2188 | 100 | 0.1464 | - |
| 0.3282 | 150 | 0.0079 | - |
| 0.4376 | 200 | 0.0028 | - |
| 0.5470 | 250 | 0.0027 | - |
| 0.6565 | 300 | 0.0017 | - |
| 0.7659 | 350 | 0.0014 | - |
| 0.8753 | 400 | 0.0015 | - |
| 0.9847 | 450 | 0.0011 | - |
| 1.0941 | 500 | 0.001 | - |
| 1.2035 | 550 | 0.0011 | - |
| 1.3129 | 600 | 0.001 | - |
| 1.4223 | 650 | 0.0011 | - |
| 1.5317 | 700 | 0.0011 | - |
| 1.6411 | 750 | 0.0009 | - |
| 1.7505 | 800 | 0.0008 | - |
| 1.8600 | 850 | 0.001 | - |
| 1.9694 | 900 | 0.0009 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.2
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
tmnam20/bert-base-multilingual-cased-vsfc-100
|
tmnam20
| 2024-01-16T06:54:35Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:53:27Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vsfc-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSFC
type: tmnam20/VieGLUE
config: vsfc
split: validation
args: vsfc
metrics:
- name: Accuracy
type: accuracy
value: 0.936197094125079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vsfc-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VSFC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2293
- Accuracy: 0.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
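The hyperparameters above correspond to a standard 🤗 Transformers `Trainer` run. A minimal sketch follows; the column name `sentence` and the three-class label space are assumptions, since the card does not describe the dataset schema.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

raw = load_dataset("tmnam20/VieGLUE", "vsfc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize(batch):
    # Assumption: the text field is called "sentence"; adjust if the config uses another name.
    return tokenizer(batch["sentence"], truncation=True)

tokenized = raw.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)   # assumption: 3 sentiment classes in VSFC

args = TrainingArguments(
    output_dir="bert-base-multilingual-cased-vsfc-100",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=100,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```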
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2138 | 1.4 | 500 | 0.2124 | 0.9330 |
| 0.1394 | 2.79 | 1000 | 0.2373 | 0.9349 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Sharathhebbar24/lite_llama_small_chat_v1
|
Sharathhebbar24
| 2024-01-16T06:53:16Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-16T06:52:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
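In the absence of an official snippet, here is a minimal, hedged sketch for loading this text-generation checkpoint with 🤗 Transformers. The prompt format is an assumption, and since the repository tags indicate 4-bit bitsandbytes weights, loading may additionally require `bitsandbytes` and `accelerate` to be installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sharathhebbar24/lite_llama_small_chat_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The chat/prompt template is an assumption; adjust to the format the model was trained with.
prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```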
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tmnam20/bert-base-multilingual-cased-mrpc-1
|
tmnam20
| 2024-01-16T06:52:18Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:51:01Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-cased-mrpc-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MRPC
type: tmnam20/VieGLUE
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8529411764705882
- name: F1
type: f1
value: 0.8884758364312267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-mrpc-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3289
- Accuracy: 0.8529
- F1: 0.8885
- Combined Score: 0.8707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-rte-10
|
tmnam20
| 2024-01-16T06:51:01Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:49:53Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-rte-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6498194945848376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-rte-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6733
- Accuracy: 0.6498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Seokeon/full_pp_berry_bowl
|
Seokeon
| 2024-01-16T06:50:27Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-16T04:57:00Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks bowl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Seokeon/full_pp_berry_bowl
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks bowl using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
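A minimal inference sketch with 🧨 Diffusers, using the instance prompt from the metadata above; the extra prompt context, step count, and fp16/CUDA settings are assumptions.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Seokeon/full_pp_berry_bowl", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sks" is the rare-token identifier this DreamBooth run was trained on.
image = pipe("a photo of sks bowl on a wooden table", num_inference_steps=50).images[0]
image.save("sks_bowl.png")
```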
|
JaehwiJeon/videomae-base-finetuned-ucf101-subset
|
JaehwiJeon
| 2024-01-16T06:49:23Z | 48 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-01-16T06:13:31Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2485
- Accuracy: 0.9032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7587 | 0.25 | 75 | 1.2436 | 0.6714 |
| 0.9272 | 1.25 | 150 | 0.6259 | 0.7857 |
| 0.2074 | 2.25 | 225 | 0.4821 | 0.8429 |
| 0.2188 | 3.25 | 300 | 0.1336 | 0.9571 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
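A hedged inference sketch for this fine-tuned checkpoint using the 🤗 `pipeline` API; the clip path is a placeholder, and the UCF101-subset class names are not listed in this card.
```python
from transformers import pipeline

# Requires a video decoding backend (e.g. `decord`) in addition to transformers.
classifier = pipeline(
    "video-classification",
    model="JaehwiJeon/videomae-base-finetuned-ucf101-subset",
)

# Placeholder path to a short clip from the UCF101 subset.
for pred in classifier("archery_clip.avi"):
    print(pred["label"], round(pred["score"], 3))
```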
|
tmnam20/bert-base-multilingual-cased-cola-10
|
tmnam20
| 2024-01-16T06:48:42Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:47:22Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- matthews_correlation
model-index:
- name: bert-base-multilingual-cased-cola-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/COLA
type: tmnam20/VieGLUE
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.1009230023823325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-cola-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6448
- Matthews Correlation: 0.1009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5762 | 1.87 | 500 | 0.6181 | 0.0372 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-vsmec-10
|
tmnam20
| 2024-01-16T06:47:21Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:46:02Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vsmec-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VSMEC
type: tmnam20/VieGLUE
config: vsmec
split: validation
args: vsmec
metrics:
- name: Accuracy
type: accuracy
value: 0.5102040816326531
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vsmec-10
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VSMEC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3252
- Accuracy: 0.5102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0922 | 2.87 | 500 | 1.3293 | 0.5058 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-mnli-100
|
tmnam20
| 2024-01-16T06:46:02Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:44:44Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-mnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MNLI
type: tmnam20/VieGLUE
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.806346623270952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-mnli-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5343
- Accuracy: 0.8063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.62 | 0.41 | 5000 | 0.6193 | 0.7459 |
| 0.5923 | 0.81 | 10000 | 0.5911 | 0.7610 |
| 0.5136 | 1.22 | 15000 | 0.5670 | 0.7808 |
| 0.4927 | 1.63 | 20000 | 0.5558 | 0.7852 |
| 0.4425 | 2.04 | 25000 | 0.5809 | 0.7844 |
| 0.4301 | 2.44 | 30000 | 0.5546 | 0.7940 |
| 0.4017 | 2.85 | 35000 | 0.5565 | 0.7963 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-wnli-1
|
tmnam20
| 2024-01-16T06:44:43Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:43:24Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-wnli-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/WNLI
type: tmnam20/VieGLUE
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.49295774647887325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-wnli-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6946
- Accuracy: 0.4930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-rte-1
|
tmnam20
| 2024-01-16T06:43:23Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:42:08Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-rte-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/RTE
type: tmnam20/VieGLUE
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6570397111913358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-rte-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6715
- Accuracy: 0.6570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Suva/bge-base-finetune-v2
|
Suva
| 2024-01-16T06:42:58Z | 55 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-16T06:32:48Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Suva/bge-base-finetune-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Suva/bge-base-finetune-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
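Since the model pipeline ends with a `Normalize()` module (see the architecture below), the embeddings are well suited to cosine-similarity search. A small follow-up sketch using the library's similarity helper; the query and documents are placeholders.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Suva/bge-base-finetune-v2')

query_emb = model.encode("How do I reset my password?", convert_to_tensor=True)
doc_embs = model.encode([
    "Steps to change your account password",
    "Quarterly revenue report",
], convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_embs)   # shape: (1, 2), higher = more similar
print(scores)
```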
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Suva/bge-base-finetune-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 167 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 25,
"evaluation_steps": 50,
"evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 417,
"weight_decay": 0.01
}
```
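A hedged sketch of how a comparable fine-tuning run could be reproduced with the parameters above; the base model and the (query, positive passage) pairs are placeholders, since neither is stated in this card.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # assumed base model for this fine-tune

# Placeholder pairs; MultipleNegativesRankingLoss treats the other in-batch positives as negatives.
train_examples = [
    InputExample(texts=["what is semantic search", "Semantic search matches meaning, not keywords."]),
    InputExample(texts=["reset password steps", "Open settings and choose 'Change password'."]),
]
train_dataloader = DataLoader(train_examples, batch_size=4, shuffle=False)

train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=25,
    warmup_steps=417,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    show_progress_bar=True,
)
```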
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
tmnam20/bert-base-multilingual-cased-wnli-100
|
tmnam20
| 2024-01-16T06:42:07Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:40:49Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-wnli-100
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/WNLI
type: tmnam20/VieGLUE
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5352112676056338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-wnli-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.5352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmnam20/bert-base-multilingual-cased-mnli-1
|
tmnam20
| 2024-01-16T06:40:48Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T06:39:31Z |
---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-mnli-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MNLI
type: tmnam20/VieGLUE
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8031936533767291
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-mnli-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5349
- Accuracy: 0.8032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8082 | 0.04 | 500 | 0.7958 | 0.6485 |
| 0.7259 | 0.08 | 1000 | 0.7455 | 0.6895 |
| 0.7018 | 0.12 | 1500 | 0.6970 | 0.7118 |
| 0.7026 | 0.16 | 2000 | 0.6827 | 0.7127 |
| 0.6696 | 0.2 | 2500 | 0.6500 | 0.7323 |
| 0.6744 | 0.24 | 3000 | 0.6345 | 0.7380 |
| 0.6136 | 0.29 | 3500 | 0.6294 | 0.7402 |
| 0.632 | 0.33 | 4000 | 0.6269 | 0.7472 |
| 0.6735 | 0.37 | 4500 | 0.6195 | 0.7489 |
| 0.6202 | 0.41 | 5000 | 0.6336 | 0.7414 |
| 0.6495 | 0.45 | 5500 | 0.6125 | 0.7517 |
| 0.6235 | 0.49 | 6000 | 0.6097 | 0.7515 |
| 0.5852 | 0.53 | 6500 | 0.6068 | 0.7581 |
| 0.6395 | 0.57 | 7000 | 0.6039 | 0.7493 |
| 0.6009 | 0.61 | 7500 | 0.5878 | 0.7553 |
| 0.6059 | 0.65 | 8000 | 0.5876 | 0.7638 |
| 0.6019 | 0.69 | 8500 | 0.5829 | 0.7651 |
| 0.5989 | 0.73 | 9000 | 0.5922 | 0.7612 |
| 0.6195 | 0.77 | 9500 | 0.5868 | 0.7615 |
| 0.6028 | 0.81 | 10000 | 0.5724 | 0.7709 |
| 0.5741 | 0.86 | 10500 | 0.5670 | 0.7717 |
| 0.582 | 0.9 | 11000 | 0.5702 | 0.7732 |
| 0.5706 | 0.94 | 11500 | 0.5597 | 0.7755 |
| 0.5676 | 0.98 | 12000 | 0.5655 | 0.7735 |
| 0.5235 | 1.02 | 12500 | 0.5849 | 0.7662 |
| 0.521 | 1.06 | 13000 | 0.5646 | 0.7788 |
| 0.5122 | 1.1 | 13500 | 0.5717 | 0.7738 |
| 0.5102 | 1.14 | 14000 | 0.5667 | 0.7765 |
| 0.5152 | 1.18 | 14500 | 0.5598 | 0.7780 |
| 0.4904 | 1.22 | 15000 | 0.5693 | 0.7746 |
| 0.507 | 1.26 | 15500 | 0.5584 | 0.7804 |
| 0.5163 | 1.3 | 16000 | 0.5570 | 0.7787 |
| 0.4921 | 1.34 | 16500 | 0.5727 | 0.7798 |
| 0.5249 | 1.39 | 17000 | 0.5653 | 0.7789 |
| 0.4994 | 1.43 | 17500 | 0.5726 | 0.7783 |
| 0.5335 | 1.47 | 18000 | 0.5547 | 0.7848 |
| 0.543 | 1.51 | 18500 | 0.5541 | 0.7785 |
| 0.5138 | 1.55 | 19000 | 0.5569 | 0.7842 |
| 0.4626 | 1.59 | 19500 | 0.5625 | 0.7860 |
| 0.4828 | 1.63 | 20000 | 0.5434 | 0.7858 |
| 0.5121 | 1.67 | 20500 | 0.5495 | 0.7806 |
| 0.5012 | 1.71 | 21000 | 0.5318 | 0.7900 |
| 0.4609 | 1.75 | 21500 | 0.5485 | 0.7878 |
| 0.4928 | 1.79 | 22000 | 0.5462 | 0.7868 |
| 0.4922 | 1.83 | 22500 | 0.5305 | 0.7920 |
| 0.4913 | 1.87 | 23000 | 0.5396 | 0.7891 |
| 0.4992 | 1.91 | 23500 | 0.5341 | 0.7952 |
| 0.4732 | 1.96 | 24000 | 0.5277 | 0.7952 |
| 0.4925 | 2.0 | 24500 | 0.5339 | 0.7943 |
| 0.4098 | 2.04 | 25000 | 0.5643 | 0.7911 |
| 0.4168 | 2.08 | 25500 | 0.5534 | 0.7929 |
| 0.4099 | 2.12 | 26000 | 0.5674 | 0.7925 |
| 0.4142 | 2.16 | 26500 | 0.5652 | 0.7918 |
| 0.398 | 2.2 | 27000 | 0.5875 | 0.7899 |
| 0.3899 | 2.24 | 27500 | 0.5726 | 0.7975 |
| 0.403 | 2.28 | 28000 | 0.5596 | 0.7968 |
| 0.399 | 2.32 | 28500 | 0.5716 | 0.7885 |
| 0.4176 | 2.36 | 29000 | 0.5570 | 0.7941 |
| 0.3871 | 2.4 | 29500 | 0.5689 | 0.7926 |
| 0.4156 | 2.44 | 30000 | 0.5648 | 0.7918 |
| 0.386 | 2.49 | 30500 | 0.5650 | 0.7931 |
| 0.4131 | 2.53 | 31000 | 0.5525 | 0.7948 |
| 0.4202 | 2.57 | 31500 | 0.5585 | 0.7914 |
| 0.4129 | 2.61 | 32000 | 0.5495 | 0.7963 |
| 0.4215 | 2.65 | 32500 | 0.5524 | 0.7978 |
| 0.413 | 2.69 | 33000 | 0.5578 | 0.7954 |
| 0.4296 | 2.73 | 33500 | 0.5509 | 0.7966 |
| 0.3602 | 2.77 | 34000 | 0.5581 | 0.7974 |
| 0.3901 | 2.81 | 34500 | 0.5561 | 0.7985 |
| 0.4163 | 2.85 | 35000 | 0.5502 | 0.7955 |
| 0.3787 | 2.89 | 35500 | 0.5573 | 0.7951 |
| 0.4285 | 2.93 | 36000 | 0.5535 | 0.7958 |
| 0.3578 | 2.97 | 36500 | 0.5563 | 0.7964 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Makhmud/whisper-uzbek
|
Makhmud
| 2024-01-16T06:38:57Z | 157 | 1 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"uz",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-16T05:54:30Z |
---
language:
- uz
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Uz - Makhmud Jumanazarov
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Uz - Makhmud Jumanazarov
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3416
- Wer: 34.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4794 | 0.54 | 1000 | 0.4504 | 42.0722 |
| 0.313 | 1.08 | 2000 | 0.3821 | 38.9392 |
| 0.2948 | 1.62 | 3000 | 0.3468 | 35.4270 |
| 0.249 | 2.16 | 4000 | 0.3416 | 34.9285 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
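A hedged inference sketch for the fine-tuned checkpoint using the ASR pipeline; the audio path is a placeholder, and the 30-second chunking for long-form audio is an assumption.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Makhmud/whisper-uzbek",
    chunk_length_s=30,   # assumption: chunk long-form audio into 30 s windows
)

# Placeholder path to a 16 kHz mono recording in Uzbek.
result = asr("uzbek_sample.wav")
print(result["text"])
```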
|