modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-04 12:28:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 539 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-04 12:28:29) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
Inv/NoroIchiChat-7B
|
Inv
| 2024-01-13T20:43:13Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-0106",
"NeverSleep/Noromaid-7B-0.4-DPO",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T20:38:58Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-0106
- NeverSleep/Noromaid-7B-0.4-DPO
---
# NoroIchiChat-7B
NoroIchiChat-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
## 🧩 Configuration
```yaml
models:
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
  - model: openchat/openchat-3.5-0106
    parameters:
      weight: 0.4
      density: 0.52
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      weight: 0.3
      density: 0.42
merge_method: dare_ties
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
tokenizer_source: union
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
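## Usage
A minimal inference sketch with 🤗 Transformers (not from the original card; it assumes the merged weights load as a standard Mistral-architecture causal LM and that `accelerate` is installed for `device_map="auto"`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Inv/NoroIchiChat-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain in one sentence what a DARE-TIES model merge does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```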
|
jysssacc/mt0-large_adalora_627_lr5e-05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-13T20:43:02Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-large",
"base_model:adapter:bigscience/mt0-large",
"license:apache-2.0",
"region:us"
] | null | 2024-01-13T18:11:01Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-large
model-index:
- name: mt0-large_adalora_627_lr5e-05_bs4_epoch5_wd0.01
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-large_adalora_627_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-large](https://huggingface.co/bigscience/mt0-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4956 | 1.0 | 157 | 1.1954 |
| 1.2811 | 2.0 | 314 | 0.9735 |
| 1.0748 | 3.0 | 471 | 0.3760 |
| 0.3128 | 4.0 | 628 | 0.1451 |
| 0.2162 | 5.0 | 785 | 0.1217 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
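Not part of the auto-generated card: a minimal sketch of loading this adapter for inference with PEFT, assuming it is a standard AdaLoRA adapter saved on top of the seq2seq base model (the downstream task is not documented, so the example input is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "bigscience/mt0-large"
adapter_id = "jysssacc/mt0-large_adalora_627_lr5e-05_bs4_epoch5_wd0.01"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the AdaLoRA weights

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```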
|
imvladikon/sentence_transformers_alephbertgimmel_small
|
imvladikon
| 2024-01-13T20:41:50Z | 53 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"he",
"dataset:imvladikon/stsb_he",
"dataset:imvladikon/wikianswers_hebrew",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-22T20:14:37Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- imvladikon/stsb_he
- imvladikon/wikianswers_hebrew
language:
- he
---
# imvladikon/sentence_transformers_alephbertgimmel_small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('imvladikon/sentence_transformers_alephbertgimmel_small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('imvladikon/sentence_transformers_alephbertgimmel_small')
model = AutoModel.from_pretrained('imvladikon/sentence_transformers_alephbertgimmel_small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=imvladikon/sentence_transformers_alephbertgimmel_small)
## Training
The model was trained with the parameters:
**DataLoader**:
`__main__.MultiDatasetDataLoader` of length 10819 with parameters:
```
{'batch_size': 'unknown'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 500,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
TheBloke/Cosmosis-3x34B-GPTQ
|
TheBloke
| 2024-01-13T20:39:59Z | 18 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"base_model:Weyaxi/Cosmosis-3x34B",
"base_model:quantized:Weyaxi/Cosmosis-3x34B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-13T11:51:53Z |
---
base_model: Weyaxi/Cosmosis-3x34B
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Cosmosis 3X34B
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- yi
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Cosmosis 3X34B - GPTQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Cosmosis 3X34B](https://huggingface.co/Weyaxi/Cosmosis-3x34B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Yağız Çalık's Cosmosis 3X34B](https://huggingface.co/Weyaxi/Cosmosis-3x34B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Cosmosis-3x34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 45.07 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 46.74 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 34.28 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 35.86 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 49.00 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Cosmosis-3x34B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Cosmosis-3x34B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Cosmosis-3x34B-GPTQ`:
```shell
mkdir Cosmosis-3x34B-GPTQ
huggingface-cli download TheBloke/Cosmosis-3x34B-GPTQ --local-dir Cosmosis-3x34B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Cosmosis-3x34B-GPTQ
huggingface-cli download TheBloke/Cosmosis-3x34B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Cosmosis-3x34B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Cosmosis-3x34B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Cosmosis-3x34B-GPTQ --local-dir Cosmosis-3x34B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Cosmosis-3x34B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Cosmosis-3x34B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Cosmosis-3x34B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Cosmosis-3x34B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Cosmosis-3x34B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's Cosmosis 3X34B

# Cosmosis-3x34B
This is the model for Cosmosis-3x34B. I used [mergekit](https://github.com/cg123/mergekit) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, you can use the prompt templates provided by bagel as well as the other experts' prompt templates.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Human Assistant
```
Human: {user}
### Assistant: {assistant}
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: bagel-dpo-34b-v0.2
    positive_prompts: ["question answering", "Q:", "science", "biology", "chemistry", "physics"]
    negative_prompts: ["math", "reason", "mathematics", "solve", "count", "code", "python", "javascript", "programming", "algorithm"]
  - source_model: Nous-Hermes-2-Yi-34B
    positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
  - source_model: SUS-Chat-34B
    positive_prompts: ["math", "reason", "mathematics", "solve", "count", "assistant"]
```
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Cosmosis-3x34B-GPTQ](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ)
##### GGUF
- [TheBloke/Cosmosis-3x34B-GGUF](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF)
##### AWQ
- [TheBloke/Cosmosis-3x34B-AWQ](https://huggingface.co/TheBloke/Cosmosis-3x34B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
TheBloke/Cosmosis-3x34B-GGUF
|
TheBloke
| 2024-01-13T20:39:04Z | 58 | 6 |
transformers
|
[
"transformers",
"gguf",
"mixtral",
"yi",
"moe",
"base_model:Weyaxi/Cosmosis-3x34B",
"base_model:quantized:Weyaxi/Cosmosis-3x34B",
"license:other",
"region:us",
"conversational"
] | null | 2024-01-13T08:58:55Z |
---
base_model: Weyaxi/Cosmosis-3x34B
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Cosmosis 3X34B
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- yi
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Cosmosis 3X34B - GGUF
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Cosmosis 3X34B](https://huggingface.co/Weyaxi/Cosmosis-3x34B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Yağız Çalık's Cosmosis 3X34B](https://huggingface.co/Weyaxi/Cosmosis-3x34B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Cosmosis-3x34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [cosmosis-3x34b.Q2_K.gguf](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF/blob/main/cosmosis-3x34b.Q2_K.gguf) | Q2_K | 2 | 31.90 GB| 34.40 GB | smallest, significant quality loss - not recommended for most purposes |
| [cosmosis-3x34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF/blob/main/cosmosis-3x34b.Q3_K_M.gguf) | Q3_K_M | 3 | 41.85 GB| 44.35 GB | very small, high quality loss |
| [cosmosis-3x34b.Q4_0.gguf](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF/blob/main/cosmosis-3x34b.Q4_0.gguf) | Q4_0 | 4 | 49.20 GB| 51.70 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| cosmosis-3x34b.Q4_K_M.gguf | Q4_K_M | 4 | 52.66 GB| 55.16 GB | medium, balanced quality - recommended |
| cosmosis-3x34b.Q5_0.gguf | Q5_0 | 5 | 60.04 GB| 62.54 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| cosmosis-3x34b.Q5_K_M.gguf | Q5_K_M | 5 | 61.83 GB| 64.33 GB | large, very low quality loss - recommended |
| cosmosis-3x34b.Q6_K.gguf | Q6_K | 6 | 71.57 GB| 74.07 GB | very large, extremely low quality loss |
| cosmosis-3x34b.Q8_0.gguf | Q8_0 | 8 | 92.70 GB| 95.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `cosmosis-3x34b.Q6_K.gguf-split-a`
* `cosmosis-3x34b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `cosmosis-3x34b.Q8_0.gguf-split-a`
* `cosmosis-3x34b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat cosmosis-3x34b.Q6_K.gguf-split-* > cosmosis-3x34b.Q6_K.gguf && rm cosmosis-3x34b.Q6_K.gguf-split-*
cat cosmosis-3x34b.Q8_0.gguf-split-* > cosmosis-3x34b.Q8_0.gguf && rm cosmosis-3x34b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B cosmosis-3x34b.Q6_K.gguf-split-a + cosmosis-3x34b.Q6_K.gguf-split-b cosmosis-3x34b.Q6_K.gguf
del cosmosis-3x34b.Q6_K.gguf-split-a cosmosis-3x34b.Q6_K.gguf-split-b
COPY /B cosmosis-3x34b.Q8_0.gguf-split-a + cosmosis-3x34b.Q8_0.gguf-split-b cosmosis-3x34b.Q8_0.gguf
del cosmosis-3x34b.Q8_0.gguf-split-a cosmosis-3x34b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Cosmosis-3x34B-GGUF and below it, a specific filename to download, such as: cosmosis-3x34b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Cosmosis-3x34B-GGUF cosmosis-3x34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Cosmosis-3x34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Cosmosis-3x34B-GGUF cosmosis-3x34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m cosmosis-3x34b.Q4_K_M.gguf --color -c 200000 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 200000` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./cosmosis-3x34b.Q4_K_M.gguf",  # Download the model file first
  n_ctx=200000,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
  "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./cosmosis-3x34b.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; this model uses ChatML
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
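As a quick illustration (not part of the original card), a minimal llama-cpp-python + LangChain sketch could look like the following; the import path assumes a recent `langchain-community` release and that the GGUF file has already been downloaded:
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./cosmosis-3x34b.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,        # context window; raise it if you have the RAM for longer contexts
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
)

# The model expects the ChatML prompt format described above
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nTell me about AI<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm.invoke(prompt))
```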
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Yağız Çalık's Cosmosis 3X34B

# Cosmosis-3x34B
This is the model for Cosmosis-3x34B. I used [mergekit](https://github.com/cg123/mergekit) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, you can use the prompt templates provided by bagel as well as the other experts' prompt templates.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Human Assistant
```
Human: {user}
### Assistant: {assistant}
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: bagel-dpo-34b-v0.2
    positive_prompts: ["question answering", "Q:", "science", "biology", "chemistry", "physics"]
    negative_prompts: ["math", "reason", "mathematics", "solve", "count", "code", "python", "javascript", "programming", "algorithm"]
  - source_model: Nous-Hermes-2-Yi-34B
    positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
  - source_model: SUS-Chat-34B
    positive_prompts: ["math", "reason", "mathematics", "solve", "count", "assistant"]
```
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Cosmosis-3x34B-GPTQ](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ)
##### GGUF
- [TheBloke/Cosmosis-3x34B-GGUF](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF)
##### AWQ
- [TheBloke/Cosmosis-3x34B-AWQ](https://huggingface.co/TheBloke/Cosmosis-3x34B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
<!-- original-model-card end -->
|
Makucas/Mistral-7B-Instruct-v0.2_03
|
Makucas
| 2024-01-13T20:32:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-13T18:56:43Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Mistral-7B-Instruct-v0.2_03
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_03
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7989 | 0.29 | 20 | 1.7039 |
| 1.5189 | 0.58 | 40 | 1.5650 |
| 1.4559 | 0.88 | 60 | 1.5602 |
| 1.3482 | 1.17 | 80 | 1.5798 |
| 1.2311 | 1.46 | 100 | 1.5889 |
| 1.2119 | 1.75 | 120 | 1.5672 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
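For reference (not part of the auto-generated card), the hyperparameters listed above correspond roughly to a 🤗 `TrainingArguments` configuration like the sketch below; `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the training configuration listed above
training_args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2_03",  # placeholder output path
    learning_rate=2e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # gives a total train batch size of 24
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.3,
    seed=42,
)
```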
|
jysssacc/mt0-large_lora_627_lr5e-05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-13T20:29:58Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-large",
"base_model:adapter:bigscience/mt0-large",
"license:apache-2.0",
"region:us"
] | null | 2024-01-13T17:50:01Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-large
model-index:
- name: mt0-large_lora_627_lr5e-05_bs4_epoch5_wd0.01
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-large_lora_627_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-large](https://huggingface.co/bigscience/mt0-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5765 | 1.0 | 157 | 0.2644 |
| 0.1485 | 2.0 | 314 | 0.0381 |
| 0.0619 | 3.0 | 471 | 0.0112 |
| 0.0287 | 4.0 | 628 | 0.0017 |
| 0.0297 | 5.0 | 785 | 0.0015 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mdubiel/SpaceInvadersNoFrameskip-v4
|
mdubiel
| 2024-01-13T20:25:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T21:12:04Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 646.00 +/- 302.80
      name: mean_reward
      verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mdubiel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mdubiel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mdubiel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
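A minimal Python loading sketch (not from the original card), using the `huggingface_sb3` helper; the checkpoint filename follows the usual RL Zoo naming convention and is an assumption, so check the repo's file list if it differs:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed filename following the RL Zoo naming convention
checkpoint = load_from_hub(
    repo_id="mdubiel/SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)  # pass custom_objects=... if loading across SB3 versions fails
```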
|
yashika0998/IoT-23-BERT-Network-Logs-Classification
|
yashika0998
| 2024-01-13T20:10:21Z | 215 | 1 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2023-12-01T17:58:45Z |
---
license: apache-2.0
inference: false
---
**Introduction:**
Exploring Language Models, we were captivated by an article on Intrusion Detection Systems and the IoT-23 dataset. This led us to explore how Language Models could improve the prediction of malicious versus benign network logs, particularly in comparison to traditional methods such as SVMs and decision trees. The motivation behind our project was to address the class-imbalance issue in existing methods and to make our research accessible, ensuring accurate predictions for this critical task.
**Intrusion Detection System Development:**
Preprocessed the public IoT-23 dataset containing both benign and malicious traffic flows. Applied the SMOTE technique to oversample the minority benign class for balanced model training.
Uploaded final datasets to the Hugging Face Hub, focusing on classification columns for accessibility and reproducibility.
Mapped column features to sentences for Language Models, fine-tuning the uncased BERT model on encoded logs for robust classification.
Achieved an impressive 96% test accuracy after 12 epochs on a Ryzen 5 CPU.
Saved the fine-tuned model in a cross-platform ONNX format for optimized deployment and future inference.
Developed an interactive Gradio interface for user log file uploads, evaluating the model in real time through captured zeek/pcap file log traffic.
Hosted the entire pipeline on Hugging Face Spaces for public availability and accessibility.
**Conclusion:**
Embarked on an NLP journey, showcasing the prowess it lends to IoT security. Anomaly detection is our key to thwarting attacks, and our open-source innovation invites more minds to join the effort.
**Note!!**
For inference and to try out the model, please head to the Space at https://huggingface.co/spaces/yashika0998/IoT-23-BERT-Network-Logs-Classification
Example sentence to test the model inference: response port is 8081. transport protocol is tcp. connection state is S0. number of packets sent by the origin is 2. number of IP level bytes sent by the originator is 80. number of IP level bytes sent by the responder is 0
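As a rough illustration (not from the original card), the example sentence above can be fed to the fine-tuned classifier with the standard `transformers` text-classification pipeline; the label names returned depend on the model's config and are not guaranteed here.
```python
# Sketch: classify one encoded network-log sentence with the fine-tuned BERT model.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="yashika0998/IoT-23-BERT-Network-Logs-Classification",
)
log_sentence = (
    "response port is 8081. transport protocol is tcp. connection state is S0. "
    "number of packets sent by the origin is 2. "
    "number of IP level bytes sent by the originator is 80. "
    "number of IP level bytes sent by the responder is 0"
)
print(clf(log_sentence))  # e.g. [{'label': ..., 'score': ...}] -- labels come from the model config
```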
|
DrishtiSharma/llama2-7b-int4-dolly-15k-english-unsloth-w-packing
|
DrishtiSharma
| 2024-01-13T20:09:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"dataset:generator",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"license:llama2",
"region:us"
] | null | 2024-01-11T21:10:20Z |
---
license: llama2
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
datasets:
- generator
base_model: unsloth/llama-2-7b
model-index:
- name: llama2-7b-int4-dolly-15k-english-unsloth-w-packing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-int4-dolly-15k-english-unsloth-w-packing
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2675 | 0.64 | 100 | 1.2318 |
| 1.1937 | 1.27 | 200 | 1.2221 |
| 1.1728 | 1.91 | 300 | 1.2178 |
| 1.1459 | 2.55 | 400 | 1.2198 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
birgermoell/gpt-sw3-6.7b-v2-instruct-4bit-gptq-slerp
|
birgermoell
| 2024-01-13T19:56:37Z | 0 | 0 | null |
[
"merge",
"mergekit",
"lazymergekit",
"AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq",
"berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | 2024-01-13T19:51:44Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq
- berkeley-nest/Starling-LM-7B-alpha
---
# gpt-sw3-6.7b-v2-instruct-4bit-gptq-slerp
gpt-sw3-6.7b-v2-instruct-4bit-gptq-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq
layer_range: [0, 32]
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 32]
merge_method: slerp
base_model: AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "birgermoell/gpt-sw3-6.7b-v2-instruct-4bit-gptq-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
malinda135/xlm-roberta-base-sinhala-hate-speech-cls
|
malinda135
| 2024-01-13T19:46:14Z | 95 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"si",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-13T18:38:35Z |
---
license: mit
language:
- si
metrics:
- accuracy
widget:
- text: අංගොඩ මානසික රෝහලට යමුද අක්කේ .පිස්සු ගෑනි
- text: අචිචිගේ රෙද්ද සංවෙිදි උබගේ සක්කිලි කම දැන් ටික ටික එලියට එනවා තෝ එන්න එපා මෙි ලංකාව තෝ ගහලා එලවනවා අයේ ඉතාලියට
- text: අඩෙ මාත් ඉන්නවා එලනෙ ඔයි
---
# Sinhala Hate Speech Classification Model
This model was trained on top of xlm-roberta-base to classify whether Sinhala text is hate speech or not.
https://github.com/malindaashan/sinhala-hate-speech-classification/blob/main/Transformers_Hate_Speech_Classification.ipynb
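A minimal usage sketch (not part of the original card), assuming the standard `transformers` text-classification pipeline works for this checkpoint:
```python
# Sketch: score a Sinhala sentence with the fine-tuned classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="malinda135/xlm-roberta-base-sinhala-hate-speech-cls")
print(clf("අඩෙ මාත් ඉන්නවා එලනෙ ඔයි"))  # one of the widget examples above; labels come from the model config
```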
|
Kooten/Kunoichi-DPO-v2-7B-6bpw-exl2
|
Kooten
| 2024-01-13T19:37:34Z | 45 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T18:46:07Z |
---
license: cc-by-nc-4.0
---
# Kunoichi-DPO-v2-7B 6bpw EXL2
## Description
Exllama quant of [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
## Other quants:
EXL2: [8bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-6bpw-exl2), [4bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-4bpw-exl2)
## Prompt format: Unsure
The previous version used the Alpaca format.
***Alpaca:***
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Contact
Kooten on discord
|
Kooten/Kunoichi-DPO-v2-7B-4bpw-exl2
|
Kooten
| 2024-01-13T19:35:52Z | 27 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T18:46:17Z |
---
license: cc-by-nc-4.0
---
# Kunoichi-DPO-v2-7B 4bpw EXL2
## Description
Exllama quant of [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
## Other quants:
EXL2: [8bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-6bpw-exl2), [4bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-4bpw-exl2)
## Prompt format: Unsure
The previous version used the Alpaca format.
***Alpaca:***
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Contact
Kooten on discord
|
Kooten/Kunoichi-DPO-v2-7B-8bpw-exl2
|
Kooten
| 2024-01-13T19:22:44Z | 359 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T18:45:58Z |
---
license: cc-by-nc-4.0
---
# Kunoichi-DPO-v2-7B 8bpw EXL2
## Description
Exllama quant of [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
## Other quants:
EXL2: [8bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-6bpw-exl2), [4bpw](https://huggingface.co/Kooten/Kunoichi-DPO-v2-7B-4bpw-exl2)
## Prompt format: Unsure
The previous version used the Alpaca format.
***Alpaca:***
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Contact
Kooten on discord
|
dharanaw5/FYP3
|
dharanaw5
| 2024-01-13T19:07:49Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T19:07:37Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Yamila/DialoGPT-small-jonesybot
|
Yamila
| 2024-01-13T18:56:21Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T18:46:44Z |
---
language:
- en
tags:
- conversational
---
|
thersfinefge/sfbdf
|
thersfinefge
| 2024-01-13T18:51:34Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-01-13T18:50:50Z |
---
license: bigscience-openrail-m
---
Numerous individuals grapple with a myriad of health challenges, including joint and muscle discomfort, stress, anxiety, poor sleep, mental health issues, and weak immunity. These issues can affect one's overall quality of life. Health experts assert that enhancing your well-being involves incorporating the proper nutrients into your diet, effectively managing mental health, and maintaining a regular exercise routine.
https://www.mid-day.com/lifestyle/infotainment/article/bioheal-cbd-gummies-reviews-do-bioheal-blood-cbd-gummies-for-diabetes-really--23329774
|
JesuisMO76/01
|
JesuisMO76
| 2024-01-13T18:48:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-13T18:46:37Z |
The Africa Cup of Nations trophy with the animal symbols of each country
|
togethercomputer/m2-bert-80M-8k-retrieval
|
togethercomputer
| 2024-01-13T18:46:45Z | 242 | 33 |
transformers
|
[
"transformers",
"pytorch",
"m2_bert",
"text-classification",
"sentence-similarity",
"custom_code",
"en",
"arxiv:2310.12109",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-04T03:08:14Z |
---
license: apache-2.0
language:
- en
pipeline_tag: sentence-similarity
inference: false
---
# Monarch Mixer-BERT
An 80M checkpoint of M2-BERT, pretrained with a sequence length of 8192 and fine-tuned for long-context retrieval.
Check out the paper [Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture](https://arxiv.org/abs/2310.12109) and our blog post on retrieval for more on how we trained this model for long sequences.
This model was trained by Jon Saad-Falcon, Dan Fu, and Simran Arora.
Check out our [GitHub](https://github.com/HazyResearch/m2/tree/main) for instructions on how to download and fine-tune it!
## How to use
You can load this model using Hugging Face `AutoModel`:
```python
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(
"togethercomputer/m2-bert-80M-8k-retrieval",
trust_remote_code=True
)
```
You should expect to see a large error message about unused parameters for FlashFFTConv.
If you'd like to load the model with FlashFFTConv, you can check out our [GitHub](https://github.com/HazyResearch/m2/tree/main).
This model generates embeddings for retrieval. The embeddings have a dimensionality of 768:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
max_seq_length = 8192
testing_string = "Every morning, I make a cup of coffee to start my day."
model = AutoModelForSequenceClassification.from_pretrained(
"togethercomputer/m2-bert-80M-8k-retrieval",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
"bert-base-uncased",
model_max_length=max_seq_length
)
input_ids = tokenizer(
[testing_string],
return_tensors="pt",
padding="max_length",
return_token_type_ids=False,
truncation=True,
max_length=max_seq_length
)
outputs = model(**input_ids)
embeddings = outputs['sentence_embedding']
```
You can also get embeddings from this model using the Together API as follows (you can find your API key [here](https://api.together.xyz/settings/api-keys)):
```python
import os
import requests
def generate_together_embeddings(text: str, model_api_string: str, api_key: str):
url = "https://api.together.xyz/api/v1/embeddings"
headers = {
"accept": "application/json",
"content-type": "application/json",
"Authorization": f"Bearer {api_key}"
}
session = requests.Session()
response = session.post(
url,
headers=headers,
json={
"input": text,
"model": model_api_string
}
)
if response.status_code != 200:
raise ValueError(f"Request failed with status code {response.status_code}: {response.text}")
return response.json()['data'][0]['embedding']
print(generate_together_embeddings(
'Hello world',
'togethercomputer/m2-bert-80M-8k-retrieval',
os.environ['TOGETHER_API_KEY'])[:10]
)
```
## Acknowledgments
Alycia Lee helped with AutoModel support.
## Citation
If you use this model, or otherwise found our work valuable, you can cite us as follows:
```
@inproceedings{fu2023monarch,
title={Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture},
author={Fu, Daniel Y and Arora, Simran and Grogan, Jessica and Johnson, Isys and Eyuboglu, Sabri and Thomas, Armin W and Spector, Benjamin and Poli, Michael and Rudra, Atri and R{\'e}, Christopher},
booktitle={Advances in Neural Information Processing Systems},
year={2023}
}
```
|
jysssacc/opt-1.3b_adalora_627_lr5e-05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-13T18:43:43Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | 2024-01-13T18:39:12Z |
---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/opt-1.3b
model-index:
- name: opt-1.3b_adalora_627_lr5e-05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b_adalora_627_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5943 | 1.0 | 157 | 4.2292 |
| 4.1943 | 2.0 | 314 | 3.8077 |
| 3.8866 | 3.0 | 471 | 3.2970 |
| 3.2933 | 4.0 | 628 | 3.0521 |
| 3.1627 | 5.0 | 785 | 3.0160 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gustavokpc/IC_setimo
|
gustavokpc
| 2024-01-13T18:42:51Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-22T15:20:01Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/IC_setimo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/IC_setimo
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1339
- Train Accuracy: 0.9523
- Train F1 M: 0.5559
- Train Precision M: 0.4041
- Train Recall M: 0.9513
- Validation Loss: 0.2110
- Validation Accuracy: 0.9222
- Validation F1 M: 0.5681
- Validation Precision M: 0.4137
- Validation Recall M: 0.9574
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2274, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
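For illustration only (a sketch assuming standard Keras APIs, not code from the original training script), the optimizer configuration above corresponds roughly to the following:
```python
# Sketch: rebuild the Adam + PolynomialDecay schedule described above in TensorFlow/Keras.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=2274,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```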
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.4176 | 0.8080 | 0.4543 | 0.3667 | 0.6857 | 0.2600 | 0.8991 | 0.5567 | 0.4108 | 0.9084 | 0 |
| 0.2122 | 0.9203 | 0.5400 | 0.3991 | 0.8908 | 0.2049 | 0.9215 | 0.5529 | 0.4068 | 0.9089 | 1 |
| 0.1339 | 0.9523 | 0.5559 | 0.4041 | 0.9513 | 0.2110 | 0.9222 | 0.5681 | 0.4137 | 0.9574 | 2 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.10.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Elvira0/Fruit
|
Elvira0
| 2024-01-13T18:39:48Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-31T16:50:06Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
mus-shd/rl_course_vizdoom_health_gathering_supreme
|
mus-shd
| 2024-01-13T18:27:08Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T18:27:00Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.98 +/- 4.88
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r mus-shd/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ntc-ai/SDXL-LoRA-slider.silhouette
|
ntc-ai
| 2024-01-13T18:22:44Z | 235 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-13T18:22:41Z |
---
language:
- en
thumbnail: "images/evaluate/silhouette.../silhouette_17_3.0.png"
widget:
- text: silhouette
output:
url: images/silhouette_17_3.0.png
- text: silhouette
output:
url: images/silhouette_19_3.0.png
- text: silhouette
output:
url: images/silhouette_20_3.0.png
- text: silhouette
output:
url: images/silhouette_21_3.0.png
- text: silhouette
output:
url: images/silhouette_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "silhouette"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - silhouette (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/silhouette_17_-3.0.png" width=256 height=256 /> | <img src="images/silhouette_17_0.0.png" width=256 height=256 /> | <img src="images/silhouette_17_3.0.png" width=256 height=256 /> |
| <img src="images/silhouette_19_-3.0.png" width=256 height=256 /> | <img src="images/silhouette_19_0.0.png" width=256 height=256 /> | <img src="images/silhouette_19_3.0.png" width=256 height=256 /> |
| <img src="images/silhouette_20_-3.0.png" width=256 height=256 /> | <img src="images/silhouette_20_0.0.png" width=256 height=256 /> | <img src="images/silhouette_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
silhouette
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.silhouette', weight_name='silhouette.safetensors', adapter_name="silhouette")
# Activate the LoRA
pipe.set_adapters(["silhouette"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, silhouette"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1080+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
quantus17/rise4
|
quantus17
| 2024-01-13T18:17:35Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-13T17:58:37Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of e6z7a armchair
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
hanasim/breeze-dsw-tiny-id
|
hanasim
| 2024-01-13T18:14:29Z | 63 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-12T15:28:05Z |
---
language:
- id
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Breeze DSW Indonesian - tiny
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 id
type: mozilla-foundation/common_voice_16_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 43.44465912227436
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Breeze DSW Indonesian - tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_16_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7090
- Wer: 43.4447
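A minimal inference sketch (not part of the original card), assuming the checkpoint works with the standard `transformers` ASR pipeline:
```python
# Sketch: transcribe an Indonesian audio file with the fine-tuned Whisper-tiny checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="hanasim/breeze-dsw-tiny-id")
print(asr("sample_indonesian.wav"))  # hypothetical local audio file
```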
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.99 | 0.1 | 100 | 0.8486 | 54.2460 |
| 0.7896 | 1.04 | 200 | 0.7578 | 48.3899 |
| 0.4164 | 1.14 | 300 | 0.7388 | 49.3284 |
| 0.5456 | 2.09 | 400 | 0.7178 | 45.7954 |
| 0.4761 | 3.03 | 500 | 0.7109 | 45.2158 |
| 0.2674 | 3.13 | 600 | 0.7007 | 44.8431 |
| 0.3628 | 4.08 | 700 | 0.7026 | 44.2497 |
| 0.2565 | 5.02 | 800 | 0.7085 | 44.5073 |
| 0.2147 | 5.12 | 900 | 0.7090 | 43.4447 |
| 0.28 | 6.06 | 1000 | 0.7134 | 43.9553 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
hardikJ11/t5-small-finetuned-cnn-news
|
hardikJ11
| 2024-01-13T18:04:55Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-01-13T17:50:25Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 23.5402
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7209
- Rouge1: 23.5402
- Rouge2: 10.8834
- Rougel: 19.3936
- Rougelsum: 22.1513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2531 | 1.0 | 718 | 2.6722 | 23.3437 | 10.5433 | 19.2183 | 21.8989 |
| 2.1518 | 2.0 | 1436 | 2.7024 | 23.4068 | 10.716 | 19.0751 | 21.9328 |
| 2.0925 | 3.0 | 2154 | 2.7235 | 23.232 | 10.5236 | 19.2254 | 21.8598 |
| 2.0808 | 4.0 | 2872 | 2.7309 | 23.7401 | 10.7664 | 19.4651 | 22.2479 |
| 2.1114 | 5.0 | 3590 | 2.7209 | 23.5402 | 10.8834 | 19.3936 | 22.1513 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mus-shd/ppo-unit8-LunarLander-v2
|
mus-shd
| 2024-01-13T17:47:01Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T17:41:34Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -95.54 +/- 59.68
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'mus-shd/ppo-unit8-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
jysssacc/opt-1.3b_lora_627_lr5e-05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-13T17:45:13Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:adapter:facebook/opt-1.3b",
"license:other",
"region:us"
] | null | 2024-01-13T17:43:34Z |
---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/opt-1.3b
model-index:
- name: opt-1.3b_lora_627_lr5e-05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b_lora_627_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6301 | 1.0 | 157 | 3.0802 |
| 3.1031 | 2.0 | 314 | 2.9470 |
| 3.0145 | 3.0 | 471 | 2.9682 |
| 2.9372 | 4.0 | 628 | 2.9680 |
| 2.8663 | 5.0 | 785 | 2.9576 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mchanakya/q-FrozenLake-v1-4x4-noSlippery
|
mchanakya
| 2024-01-13T17:27:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T17:27:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mchanakya/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gflexx/q-Taxi-v3
|
gflexx
| 2024-01-13T17:13:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T17:13:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="gflexx/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bmanobel/trained_bloomz_model
|
bmanobel
| 2024-01-13T17:08:56Z | 3 | 0 |
peft
|
[
"peft",
"base_model:bigscience/bloomz-3b",
"base_model:adapter:bigscience/bloomz-3b",
"region:us"
] | null | 2023-06-28T09:10:59Z |
---
library_name: peft
base_model: bigscience/bloomz-3b
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
bmanobel/falcon7b-kolt_translator_10k_800steps_5epoch
|
bmanobel
| 2024-01-13T17:08:35Z | 5 | 0 |
peft
|
[
"peft",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"region:us"
] | null | 2023-06-29T06:00:03Z |
---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
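For reference, a sketch (not from the original card) of how the quantization settings above map to a `BitsAndBytesConfig` when loading the base model and attaching this adapter with PEFT; exact argument support depends on the installed `transformers`/`peft` versions.
```python
# Sketch: load the 4-bit base model and apply the adapter (illustrative, version-dependent).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "bmanobel/falcon7b-kolt_translator_10k_800steps_5epoch")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
```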
|
exyl-drl-learn/ppo-Pyramids
|
exyl-drl-learn
| 2024-01-13T17:02:41Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-01-13T17:02:35Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: exyl-drl-learn/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
v3ucn/wizard3
|
v3ucn
| 2024-01-13T17:00:25Z | 2 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-01-13T16:38:48Z |
Voice models for characters from the game The Witcher 3:
Yennefer (叶内法), 卡拉梅兹
Based on the Chinese-specialized branch of Bert-vits2-Extra
|
rpayanm/stable-ts
|
rpayanm
| 2024-01-13T16:55:55Z | 0 | 0 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2024-01-13T15:46:55Z |
---
license: mit
language:
- en
---
# Stabilizing Timestamps for Whisper
This library modifies [Whisper](https://github.com/openai/whisper) to produce more reliable timestamps and extends its functionality.
https://github.com/jianfch/stable-ts/assets/28970749/7adf0540-3620-4b2b-b2d4-e316906d6dfa
* [Setup](#setup)
* [Usage](#usage)
* [Transcribe](#transcribe)
* [Output](#output)
* [Alignment](#alignment)
* [Adjustments](#adjustments)
* [Refinement](#refinement)
* [Regrouping Words](#regrouping-words)
* [Editing](#editing)
* [Locating Words](#locating-words)
* [Silence Suppression](#silence-suppression)
* [Tips](#tips)
* [Visualizing Suppression](#visualizing-suppression)
* [Encode Comparison](#encode-comparison)
* [Use with any ASR](#any-asr)
* [Quick 1.X → 2.X Guide](#quick-1x--2x-guide)
## Setup
```
pip install -U stable-ts
```
To install the latest commit:
```
pip install -U git+https://github.com/jianfch/stable-ts.git
```
## Usage
### Transcribe
```python
import stable_whisper
model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')
result.to_srt_vtt('audio.srt')
```
<details>
<summary>CLI</summary>
```commandline
stable-ts audio.mp3 -o audio.srt
```
</details>
Docstrings:
<details>
<summary>load_model()</summary>
Load an instance if :class:`whisper.model.Whisper`.
Parameters
----------
name : {'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1',
'large-v2', 'large-v3', or 'large'}
One of the official model names listed by :func:`whisper.available_models`, or
path to a model checkpoint containing the model dimensions and the model state_dict.
device : str or torch.device, optional
PyTorch device to put the model into.
download_root : str, optional
Path to download the model files; by default, it uses "~/.cache/whisper".
in_memory : bool, default False
Whether to preload the model weights into host memory.
cpu_preload : bool, default True
Load model into CPU memory first then move model to specified device
to reduce GPU memory usage when loading model
dq : bool, default False
Whether to apply Dynamic Quantization to model to reduced memory usage and increase inference speed
but at the cost of a slight decrease in accuracy. Only for CPU.
Returns
-------
model : "Whisper"
The Whisper ASR model instance.
Notes
-----
The overhead from ``dq = True`` might make inference slower for models smaller than 'large'.
</details>
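For example, a quick sketch based on the parameters documented above (illustrative; `dq` trades a little accuracy for lower memory use and faster inference on CPU):
```python
import stable_whisper

# Load the 'base' model on CPU with dynamic quantization enabled.
model = stable_whisper.load_model('base', device='cpu', dq=True)
```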
<details>
<summary>transcribe()</summary>
Transcribe audio using Whisper.
This is a modified version of :func:`whisper.transcribe.transcribe` with slightly different decoding logic while
allowing additional preprocessing and postprocessing. The preprocessing performed on the audio includes: isolating
voice / removing noise with Demucs and a low/high-pass filter. The postprocessing performed on the transcription
result includes: adjusting timestamps with VAD and custom regrouping of segments based on punctuation and speech gaps.
Parameters
----------
model : whisper.model.Whisper
An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes
Path/URL to the audio file, the audio waveform, or bytes of audio file.
If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled at 16kHz.
verbose : bool or None, default False
Whether to display the text being decoded to the console.
Displays all the details if ``True``. Displays progressbar if ``False``. Display nothing if ``None``.
temperature : float or iterable of float, default (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
Temperature for sampling. It can be a tuple of temperatures, which will be successively used
upon failures according to either ``compression_ratio_threshold`` or ``logprob_threshold``.
compression_ratio_threshold : float, default 2.4
If the gzip compression ratio is above this value, treat as failed.
logprob_threshold : float, default -1
If the average log probability over sampled tokens is below this value, treat as failed
no_speech_threshold : float, default 0.6
If the no_speech probability is higher than this value AND the average log probability
over sampled tokens is below ``logprob_threshold``, consider the segment as silent
condition_on_previous_text : bool, default True
If ``True``, the previous output of the model is provided as a prompt for the next window;
disabling may make the text inconsistent across windows, but the model becomes less prone to
getting stuck in a failure loop, such as repetition looping or timestamps going out of sync.
initial_prompt : str, optional
Text to provide as a prompt for the first window. This can be used to provide, or
"prompt-engineer" a context for transcription, e.g. custom vocabularies or proper nouns
to make it more likely to predict those words correctly.
word_timestamps : bool, default True
Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
and include the timestamps for each word in each segment.
Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
String for customizing the regrouping algorithm. False disables regrouping.
Ignored if ``word_timestamps = False``.
ts_num : int, default 0, meaning disable this option
Number of extra timestamp inferences to perform, then use the average of these extra timestamps.
An experimental option that might hurt performance.
ts_noise : float, default 0.1
Percentage of noise to add to audio_features to perform inferences for ``ts_num``.
suppress_silence : bool, default True
Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
Quantization levels for generating timestamp suppression mask; ignored if ``vad = true``.
Acts as a threshold for marking sound as silent.
Fewer levels will increase the threshold of volume at which to mark a sound as silent.
k_size : int, default 5
Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = true``.
Recommend 5 or 3; higher sizes will reduce detection of silence.
time_scale : float, optional
Factor for scaling audio duration for inference.
Greater than 1.0 'slows down' the audio, and less than 1.0 'speeds up' the audio. None is the same as 1.0.
A factor of 1.5 will stretch 10s audio to 15s for inference. This increases the effective resolution
of the model but can increase word error rate.
demucs : bool or torch.nn.Module, default False
Whether to preprocess ``audio`` with Demucs to isolate vocals / remove noise. Set ``demucs`` to an instance of
a Demucs model to avoid reloading the model for each run.
Demucs must be installed to use. Official repo. https://github.com/facebookresearch/demucs.
demucs_output : str, optional
Path to save the vocals isolated by Demucs as WAV file. Ignored if ``demucs = False``.
Demucs must be installed to use. Official repo. https://github.com/facebookresearch/demucs.
demucs_options : dict, optional
Options to use for :func:`stable_whisper.audio.demucs_audio`.
vad : bool, default False
Whether to use Silero VAD to generate timestamp suppression mask.
Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
vad_onnx : bool, default False
Whether to use ONNX for Silero VAD.
min_word_dur : float, default 0.1
Shortest duration each word is allowed to reach for silence suppression.
nonspeech_error : float, default 0.3
Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
Whether to only use sound between 200 - 5000 Hz, where the majority of human speech is.
prepend_punctuations : str, default '"\'“¿([{-)'
Punctuations to prepend to next word.
append_punctuations : str, default '.。,,!!??::”)]}、)'
Punctuations to append to previous word.
mel_first : bool, default False
Process the entire audio track into a log-Mel spectrogram first instead of in chunks.
Use this if odd behavior is seen in stable-ts but not in whisper; it uses significantly more memory for long audio.
split_callback : Callable, optional
Custom callback for grouping tokens up with their corresponding words.
The callback must take two arguments, list of tokens and tokenizer.
The callback returns a tuple with a list of words and a corresponding nested list of tokens.
suppress_ts_tokens : bool, default False
Whether to suppress timestamp tokens during inference at timestamps that are detected as silent.
Reduces hallucinations in some cases, but also prone to ignore disfluencies and repetitions.
This option is ignored if ``suppress_silence = False``.
gap_padding : str, default ' ...'
Padding prepended to each segment for word timing alignment.
Used to reduce the probability of model predicting timestamps earlier than the first utterance.
only_ffmpeg : bool, default False
Whether to use only FFmpeg (instead of yt-dlp) for URLs
max_instant_words : float, default 0.5
If the percentage of instantaneous words in a segment exceeds this amount, the segment is removed.
avg_prob_threshold: float or None, default None
Transcribe the gap after the previous word, and if the average word probability of a segment falls below this
value, discard the segment. If ``None``, skip transcribing the gap to reduce the chance of timestamps starting
before the next utterance.
progress_callback : Callable, optional
A function that will be called when transcription progress is updated.
The callback needs two parameters.
The first parameter is a float for seconds of the audio that has been transcribed.
The second parameter is a float for total duration of audio in seconds.
ignore_compatibility : bool, default False
Whether to ignore warnings for compatibility issues with the detected Whisper version.
decode_options
Keyword arguments to construct class:`whisper.decode.DecodingOptions` instances.
Returns
-------
stable_whisper.result.WhisperResult
All timestamps, words, probabilities, and other data from the transcription of ``audio``.
See Also
--------
stable_whisper.non_whisper.transcribe_any : Return :class:`stable_whisper.result.WhisperResult` containing all the
data from transcribing audio with unmodified :func:`whisper.transcribe.transcribe` with preprocessing and
postprocessing.
stable_whisper.whisper_word_level.load_faster_whisper.faster_transcribe : Return
:class:`stable_whisper.result.WhisperResult` containing all the data from transcribing audio with
:meth:`faster_whisper.WhisperModel.transcribe` with preprocessing and postprocessing.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
</details>
<details>
<summary>transcribe_minimal()</summary>
Transcribe audio using Whisper.
This uses the original whisper transcribe function, :func:`whisper.transcribe.transcribe`, while still allowing
additional preprocessing and postprocessing. The preprocessing performed on the audio includes: isolating voice /
removing noise with Demucs and a low/high-pass filter. The postprocessing performed on the transcription
result includes: adjusting timestamps with VAD and custom regrouping of segments based on punctuation and speech gaps.
Parameters
----------
model : whisper.model.Whisper
An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes
Path/URL to the audio file, the audio waveform, or bytes of audio file.
If audio is ``numpy.ndarray`` or ``torch.Tensor``, the audio must already be sampled at 16kHz.
verbose : bool or None, default False
Whether to display the text being decoded to the console.
Displays all the details if ``True``. Displays progressbar if ``False``. Display nothing if ``None``.
word_timestamps : bool, default True
Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
and include the timestamps for each word in each segment.
Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
String for customizing the regrouping algorithm. False disables regrouping.
Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
Quantization levels for generating timestamp suppression mask; ignored if ``vad = true``.
Acts as a threshold for marking sound as silent.
Fewer levels will increase the threshold of volume at which to mark a sound as silent.
k_size : int, default 5
Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = true``.
Recommend 5 or 3; higher sizes will reduce detection of silence.
demucs : bool or torch.nn.Module, default False
Whether to preprocess ``audio`` with Demucs to isolate vocals / remove noise. Set ``demucs`` to an instance of
a Demucs model to avoid reloading the model for each run.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_output : str, optional
Path to save the vocals isolated by Demucs as WAV file. Ignored if ``demucs = False``.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_options : dict, optional
Options to use for :func:`stable_whisper.audio.demucs_audio`.
vad : bool, default False
Whether to use Silero VAD to generate timestamp suppression mask.
Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
vad_onnx : bool, default False
Whether to use ONNX for Silero VAD.
min_word_dur : float, default 0.1
Shortest duration each word is allowed to reach for silence suppression.
nonspeech_error : float, default 0.3
Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
Whether to only use sound between 200 - 5000 Hz, where the majority of human speech is.
only_ffmpeg : bool, default False
Whether to use only FFmpeg (instead of yt-dlp) for URLs
options
Additional options used for :func:`whisper.transcribe.transcribe` and
:func:`stable_whisper.non_whisper.transcribe_any`.
Returns
-------
stable_whisper.result.WhisperResult
All timestamps, words, probabilities, and other data from the transcription of ``audio``.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe_minimal('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
</details>
<br>
<details>
<summary>faster-whisper</summary>
Use with [faster-whisper](https://github.com/guillaumekln/faster-whisper):
```python
model = stable_whisper.load_faster_whisper('base')
result = model.transcribe_stable('audio.mp3')
```
```commandline
stable-ts audio.mp3 -o audio.srt -fw
```
Docstring:
<details>
<summary>load_faster_whisper()</summary>
Load an instance of :class:`faster_whisper.WhisperModel`.
Parameters
----------
model_size_or_path : {'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1',
'large-v2', 'large-v3', or 'large'}
Size of the model.
model_init_options
Additional options to use for initialization of :class:`faster_whisper.WhisperModel`.
Returns
-------
faster_whisper.WhisperModel
A modified instance with :func:`stable_whisper.whisper_word_level.load_faster_whisper.faster_transcribe`
assigned to :meth:`faster_whisper.WhisperModel.transcribe_stable`.
</details>
<details>
<summary>transcribe_stable()</summary>
Transcribe audio using faster-whisper (https://github.com/guillaumekln/faster-whisper).
This uses the transcribe method from faster-whisper, :meth:`faster_whisper.WhisperModel.transcribe`, while
still allowing additional preprocessing and postprocessing. The preprocessing performed on the audio includes:
isolating voice / removing noise with Demucs and a low/high-pass filter. The postprocessing performed on the
transcription result includes: adjusting timestamps with VAD and custom regrouping of segments based on punctuation
and speech gaps.
Parameters
----------
model : faster_whisper.WhisperModel
The faster-whisper ASR model instance.
audio : str or numpy.ndarray or torch.Tensor or bytes
Path/URL to the audio file, the audio waveform, or bytes of audio file.
If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled to 16kHz.
verbose : bool or None, default False
Whether to display the text being decoded to the console.
Displays all the details if ``True``. Displays progressbar if ``False``. Display nothing if ``None``.
word_timestamps : bool, default True
Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
and include the timestamps for each word in each segment.
Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
String for customizing the regrouping algorithm. False disables regrouping.
Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
Quantization levels for generating timestamp suppression mask; ignored if ``vad = true``.
Acts as a threshold to marking sound as silent.
Fewer levels will increase the threshold of volume at which to mark a sound as silent.
k_size : int, default 5
Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = true``.
Recommend 5 or 3; higher sizes will reduce detection of silence.
demucs : bool or torch.nn.Module, default False
Whether to preprocess ``audio`` with Demucs to isolate vocals / remove noise. Set ``demucs`` to an instance
of a Demucs model to avoid reloading the model for each run.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_output : str, optional
Path to save the vocals isolated by Demucs as WAV file. Ignored if ``demucs = False``.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_options : dict, optional
Options to use for :func:`stable_whisper.audio.demucs_audio`.
vad : bool, default False
Whether to use Silero VAD to generate timestamp suppression mask.
Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
vad_onnx : bool, default False
Whether to use ONNX for Silero VAD.
min_word_dur : float, default 0.1
Shortest duration each word is allowed to reach for silence suppression.
nonspeech_error : float, default 0.3
Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
Whether to only use sound between 200 - 5000 Hz, where majority of human speech are.
only_ffmpeg : bool, default False
Whether to use only FFmpeg (instead of yt-dlp) for URLs
check_sorted : bool, default True
Whether to raise an error when timestamps returned by faster-whipser are not in ascending order.
progress_callback : Callable, optional
A function that will be called when transcription progress is updated.
The callback needs two parameters.
The first parameter is a float for seconds of the audio that has been transcribed.
The second parameter is a float for total duration of audio in seconds.
options
Additional options used for :meth:`faster_whisper.WhisperModel.transcribe` and
:func:`stable_whisper.non_whisper.transcribe_any`.
Returns
-------
stable_whisper.result.WhisperResult
All timestamps, words, probabilities, and other data from the transcription of ``audio``.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_faster_whisper('base')
>>> result = model.transcribe_stable('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
</details>
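For example, a minimal sketch of a ``progress_callback`` following the two-parameter signature described above (the function name and print format are arbitrary):
```python
def on_progress(seconds_transcribed: float, total_seconds: float):
    # called whenever transcription progress is updated
    print(f'{seconds_transcribed / total_seconds:.1%} transcribed')

result = model.transcribe_stable('audio.mp3', progress_callback=on_progress)
```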
</details>
### Output
Stable-ts supports various text output formats.
```python
result.to_srt_vtt('audio.srt') #SRT
result.to_srt_vtt('audio.vtt') #VTT
result.to_ass('audio.ass') #ASS
result.to_tsv('audio.tsv') #TSV
```
Docstrings:
<details>
<summary>result_to_srt_vtt()</summary>
Generate SRT/VTT from ``result`` to display segment-level and/or word-level timestamp.
Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
Path to save file.
segment_level : bool, default True
Whether to use segment-level timestamps in output.
word_level : bool, default True
Whether to use word-level timestamps in output.
min_dur : float, default 0.2
Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
tag: tuple of (str, str), default None, meaning ('<font color="#00ff00">', '</font>') if SRT else ('<u>', '</u>')
Tag used to change the properties of a word at its timestamp.
vtt : bool, default None, meaning determined by extension of ``filepath`` or ``False`` if no valid extension.
Whether to output VTT.
strip : bool, default True
Whether to remove spaces before and after text on each segment for output.
reverse_text: bool or tuple, default False
Whether to reverse the order of words for each segment or provide the ``prepend_punctuations`` and
``append_punctuations`` as tuple pair instead of ``True`` which is for the default punctuations.
Returns
-------
str
String of the content if ``filepath`` is ``None``.
Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
seems to not suffer from this issue.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
</details>
<details>
<summary>result_to_ass()</summary>
Generate Advanced SubStation Alpha (ASS) file from ``result`` to display segment-level and/or word-level timestamp.
Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
Path to save file.
segment_level : bool, default True
Whether to use segment-level timestamps in output.
word_level : bool, default True
Whether to use word-level timestamps in output.
min_dur : float, default 0.2
Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
tag: tuple of (str, str) or int, default None, meaning use default highlighting
Tag used to change the properties of a word at its timestamp. -1 for individual word highlight tag.
font : str, default `Arial`
Word font.
font_size : int, default 48
Word font size.
strip : bool, default True
Whether to remove spaces before and after text on each segment for output.
highlight_color : str, default '00ff00'
Hexadecimal of the color to use for default highlights as '<bb><gg><rr>'.
karaoke : bool, default False
Whether to use progressive filling highlights (for karaoke effect).
reverse_text: bool or tuple, default False
Whether to reverse the order of words for each segment or provide the ``prepend_punctuations`` and
``append_punctuations`` as tuple pair instead of ``True`` which is for the default punctuations.
kwargs:
Format styles:
'Name', 'Fontname', 'Fontsize', 'PrimaryColour', 'SecondaryColour', 'OutlineColour', 'BackColour', 'Bold',
'Italic', 'Underline', 'StrikeOut', 'ScaleX', 'ScaleY', 'Spacing', 'Angle', 'BorderStyle', 'Outline',
'Shadow', 'Alignment', 'MarginL', 'MarginR', 'MarginV', 'Encoding'
Returns
-------
str
String of the content if ``filepath`` is ``None``.
Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
seems to not suffer from this issue.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_ass('audio.ass')
Saved: audio.ass
</details>
<details>
<summary>result_to_tsv()</summary>
Generate TSV from ``result`` to display segment-level and/or word-level timestamp.
Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
Path to save file.
segment_level : bool, default True
Whether to use segment-level timestamps in output.
word_level : bool, default True
Whether to use word-level timestamps in output.
min_dur : float, default 0.2
Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
strip : bool, default True
Whether to remove spaces before and after text on each segment for output.
reverse_text: bool or tuple, default False
Whether to reverse the order of words for each segment or provide the ``prepend_punctuations`` and
``append_punctuations`` as tuple pair instead of ``True`` which is for the default punctuations.
Returns
-------
str
String of the content if ``filepath`` is ``None``.
Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
seems to not suffer from this issue.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_tsv('audio.tsv')
Saved: audio.tsv
</details>
<details>
<summary>result_to_txt()</summary>
Generate plain-text without timestamps from ``result``.
Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
Result of transcription.
filepath : str, default None, meaning content will be returned as a ``str``
Path to save file.
min_dur : float, default 0.2
Minimum duration allowed for any word/segment before the word/segments are merged with adjacent word/segments.
strip : bool, default True
Whether to remove spaces before and after text on each segment for output.
reverse_text: bool or tuple, default False
Whether to reverse the order of words for each segment or provide the ``prepend_punctuations`` and
``append_punctuations`` as tuple pair instead of ``True`` which is for the default punctuations.
Returns
-------
str
String of the content if ``filepath`` is ``None``.
Notes
-----
``reverse_text`` will not fix RTL text not displaying tags properly, which is an issue with some video players. VLC
seems to not suffer from this issue.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.to_txt('audio.txt')
Saved: audio.txt
</details>
<details>
<summary>save_as_json()</summary>
Save ``result`` as JSON file to ``path``.
Parameters
----------
result : dict or list or stable_whisper.result.WhisperResult
Result of transcription.
path : str
Path to save file.
ensure_ascii : bool, default False
Whether to escape non-ASCII characters.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> result.save_as_json('audio.json')
Saved: audio.json
</details>
<br /><br />
There are word-level and segment-level timestamps. All output formats support them.
They also support both levels simultaneously, except TSV.
By default, `segment_level` and `word_level` are both `True` for all the formats that support both simultaneously.<br /><br />
Examples in VTT.
Default: `segment_level=True` + `word_level=True`
<details>
<summary>CLI</summary>
`--segment_level true` + `--word_level true`
</details>
```
00:00:07.760 --> 00:00:09.900
But<00:00:07.860> when<00:00:08.040> you<00:00:08.280> arrived<00:00:08.580> at<00:00:08.800> that<00:00:09.000> distant<00:00:09.400> world,
```
`segment_level=True` + `word_level=False`
```
00:00:07.760 --> 00:00:09.900
But when you arrived at that distant world,
```
`segment_level=False` + `word_level=True`
```
00:00:07.760 --> 00:00:07.860
But
00:00:07.860 --> 00:00:08.040
when
00:00:08.040 --> 00:00:08.280
you
00:00:08.280 --> 00:00:08.580
arrived
...
```
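A minimal sketch for producing the above variants by toggling the two levels (the filenames are arbitrary):
```python
# segment-level only: one cue per segment
result.to_srt_vtt('audio_segments.vtt', word_level=False)
# word-level only: one cue per word
result.to_srt_vtt('audio_words.vtt', segment_level=False)
```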
#### JSON
The result can also be saved as a JSON file to preserve all the data for future reprocessing.
This is useful for testing different sets of postprocessing arguments without the need to redo inference.
```python
result.save_as_json('audio.json')
```
<details>
<summary>CLI</summary>
```commandline
stable-ts audio.mp3 -o audio.json
```
</details>
Processing JSON file of the results into SRT.
```python
result = stable_whisper.WhisperResult('audio.json')
result.to_srt_vtt('audio.srt')
```
<details>
<summary>CLI</summary>
```commandline
stable-ts audio.json -o audio.srt
```
</details>
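For example, a sketch of applying a different post-processing pass to the saved result without re-running inference (the methods and values are illustrative; see [Regrouping Words](#regrouping-words) below for what they do):
```python
import stable_whisper

result = stable_whisper.WhisperResult('audio.json')
# split at larger pauses, then merge very short neighboring segments
result.split_by_gap(0.4).merge_by_gap(0.15, max_words=4)
result.to_srt_vtt('audio_alt.srt')
```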
### Alignment
Audio can be aligned/synced with plain text on word-level.
```python
text = 'Machines thinking, breeding. You were to bear us a new, promised land.'
result = model.align('audio.mp3', text, language='en')
```
When the text is correct but the timestamps need more work,
`align()` is a faster alternative for testing various settings/models.
```python
new_result = model.align('audio.mp3', result, language='en')
```
<details>
<summary>CLI</summary>
```commandline
stable-ts audio.mp3 --align text.txt --language en
```
`--align` can also be a JSON file of a result
</details>
Docstring:
<details>
<summary>align()</summary>
Align plain text or tokens with audio at word-level.
Since this is significantly faster than transcribing, it is a more efficient method for testing various settings
without re-transcribing. This is also useful for timing a more correct transcript than one that Whisper can produce.
Parameters
----------
model : "Whisper"
The Whisper ASR model modified instance
audio : str or numpy.ndarray or torch.Tensor or bytes
Path/URL to the audio file, the audio waveform, or bytes of audio file.
If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled to 16kHz.
text : str or list of int or stable_whisper.result.WhisperResult
String of plain-text, list of tokens, or instance of :class:`stable_whisper.result.WhisperResult`.
language : str, default None, uses ``language`` in ``text`` if it is a :class:`stable_whisper.result.WhisperResult`
Language of ``text``. Required if ``text`` does not contain ``language``.
remove_instant_words : bool, default False
Whether to truncate any words with zero duration.
token_step : int, default 100
Max number of tokens to align each pass. Use higher values to reduce chance of misalignment.
original_split : bool, default False
Whether to preserve the original segment groupings. Segments are split by line break if ``text`` is plain-text.
max_word_dur : float or None, default 3.0
Global maximum word duration in seconds. Re-align words that exceed the global maximum word duration.
word_dur_factor : float or None, default 2.0
Factor to compute the local maximum word duration, which is ``word_dur_factor`` * local medium word duration.
Words that need re-alignment are re-aligned with duration <= local/global maximum word duration.
nonspeech_skip : float or None, default 3.0
Skip non-speech sections that are equal or longer than this duration in seconds. Disable skipping if ``None``.
fast_mode : bool, default False
Whether to speed up alignment by re-alignment with local/global maximum word duration.
``True`` tends to produce better timestamps when ``text`` is accurate and there are no large speechless gaps.
tokenizer : "Tokenizer", default None, meaning a new tokenizer is created according to ``language`` and ``model``
A tokenizer used to tokenize text and detokenize tokens.
verbose : bool or None, default False
Whether to display the text being decoded to the console.
Displays all the details if ``True``. Displays progressbar if ``False``. Display nothing if ``None``.
regroup : bool or str, default True, meaning the default regroup algorithm
String for customizing the regrouping algorithm. False disables regrouping.
Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
Whether to enable timestamps adjustments based on the detected silence.
suppress_word_ts : bool, default True
Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
Quantization levels for generating timestamp suppression mask; ignored if ``vad = true``.
Acts as a threshold to marking sound as silent.
Fewer levels will increase the threshold of volume at which to mark a sound as silent.
k_size : int, default 5
Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = true``.
Recommend 5 or 3; higher sizes will reduce detection of silence.
demucs : bool or torch.nn.Module, default False
Whether to preprocess ``audio`` with Demucs to isolate vocals / remove noise. Set ``demucs`` to an instance of
a Demucs model to avoid reloading the model for each run.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_output : str, optional
Path to save the vocals isolated by Demucs as WAV file. Ignored if ``demucs = False``.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_options : dict, optional
Options to use for :func:`stable_whisper.audio.demucs_audio`.
vad : bool, default False
Whether to use Silero VAD to generate timestamp suppression mask.
Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
vad_onnx : bool, default False
Whether to use ONNX for Silero VAD.
min_word_dur : float, default 0.1
Shortest duration each word is allowed to reach for silence suppression.
nonspeech_error : float, default 0.3
Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
Whether to only use sound between 200 - 5000 Hz, where majority of human speech are.
prepend_punctuations : str, default '"'“¿([{-)'
Punctuations to prepend to next word.
append_punctuations : str, default '.。,,!!??::”)]}、)'
Punctuations to append to previous word.
progress_callback : Callable, optional
A function that will be called when transcription progress is updated.
The callback needs two parameters.
The first parameter is a float for seconds of the audio that has been transcribed.
The second parameter is a float for total duration of audio in seconds.
ignore_compatibility : bool, default False
Whether to ignore warnings for compatibility issues with the detected Whisper version.
Returns
-------
stable_whisper.result.WhisperResult or None
All timestamps, words, probabilities, and other data from the alignment of ``audio``. Return None if alignment
fails and ``remove_instant_words = True``.
Notes
-----
If ``token_step`` is less than 1, ``token_step`` will be set to its maximum value, 442. This value is computed with
``whisper.model.Whisper.dims.n_text_ctx`` - 6.
If ``original_split = True`` and a line break is found in the middle of a word in ``text``, the split will occur after
that word.
``regroup`` is ignored if ``original_split = True``.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.align('helloworld.mp3', 'Hello, World!', 'English')
>>> result.to_srt_vtt('helloword.srt')
Saved 'helloworld.srt'
</details>
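For instance, a minimal sketch that keeps the line breaks of the provided text as segment boundaries via ``original_split`` (`lyrics.txt` is a hypothetical file with one line per desired segment):
```python
with open('lyrics.txt', encoding='utf-8') as f:
    text = f.read()
result = model.align('audio.mp3', text, language='en', original_split=True)
result.to_srt_vtt('audio.srt', word_level=False)
```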
#### Adjustments
Timestamps are adjusted after the model predicts them.
When `suppress_silence=True` (default), `transcribe()`/`transcribe_minimal()`/`align()` adjust based on silence/non-speech.
The timestamps can be further adjusted based on another result with `adjust_by_result()`,
which acts as a logical AND operation for the timestamps of both results, further reducing duration of each word.
Note: both results are required to have word timestamps and matching words.
```python
# the adjustments are in-place for `result`
result.adjust_by_result(new_result)
```
Docstring:
<details>
<summary>adjust_by_result()</summary>
Minimize the duration of words using timestamps of another result.
Parameters
----------
other_result : "WhisperResult"
Timing data of the same words in a WhisperResult instance.
min_word_dur : float, default 0.1
Prevent changes to timestamps if the resultant word duration is less than ``min_word_dur``.
verbose : bool, default False
Whether to print out the timestamp changes.
</details>
### Refinement
Timestamps can be further improved with `refine()`.
This method iteratively mutes portions of the audio based on current timestamps
then computes the probabilities of the tokens.
Then by monitoring the fluctuation of the probabilities, it tries to find the most precise timestamps.
"Most precise" in this case means the latest start and earliest end for the word
such that it still meets the specified conditions.
```python
model.refine('audio.mp3', result)
```
<details>
<summary>CLI</summary>
```commandline
stable-ts audio.mp3 --refine -o audio.srt
```
Input can also be JSON file of a result.
```commandline
stable-ts result.json --refine -o audio.srt --refine_option "audio=audio.mp3"
```
</details>
Docstring:
<details>
<summary>refine()</summary>
Improve existing timestamps.
This function iteratively mutes portions of the audio and monitors token probabilities to find the most precise
timestamps. "Most precise" in this case means the latest start and earliest end of a word that maintains an
acceptable probability determined by the specified arguments.
This is useful for readjusting timestamps when they start too early or end too late.
Parameters
----------
model : "Whisper"
The Whisper ASR model modified instance
audio : str or numpy.ndarray or torch.Tensor or bytes
Path/URL to the audio file, the audio waveform, or bytes of audio file.
If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled to 16kHz.
result : stable_whisper.result.WhisperResult
All timestamps, words, probabilities, and other data from the transcription of ``audio``.
steps : str, default 'se'
Instructions for refinement. A 's' means refine start-timestamps. An 'e' means refine end-timestamps.
rel_prob_decrease : float, default 0.3
Maximum percent decrease in probability relative to original probability, which is the probability from muting
according to the initial timestamps.
abs_prob_decrease : float, default 0.05
Maximum decrease in probability from original probability.
rel_rel_prob_decrease : float, optional
Maximum percent decrease in probability relative to the previous probability, which is the probability from the
previous iteration of muting.
prob_threshold : float, default 0.5
Stop refining the timestamp if the probability of its token goes below this value.
rel_dur_change : float, default 0.5
Maximum percent change in duration of a word relative to its original duration.
abs_dur_change : float, optional
Maximum seconds a word is allowed to deviate from its original duration.
word_level : bool, default True
Whether to refine timestamps on word-level. If ``False``, only refine start/end timestamps of each segment.
precision : float, default 0.1
Precision of refined timestamps in seconds. The lowest precision is 0.02 second.
single_batch : bool, default False
Whether to process in only batch size of one to reduce memory usage.
inplace : bool, default True, meaning return a deepcopy of ``result``
Whether to alter timestamps in-place.
demucs : bool or torch.nn.Module, default False
Whether to preprocess ``audio`` with Demucs to isolate vocals / remove noise. Set ``demucs`` to an instance of
a Demucs model to avoid reloading the model for each run.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_options : dict, optional
Options to use for :func:`stable_whisper.audio.demucs_audio`.
only_voice_freq : bool, default False
Whether to only use sound between 200 - 5000 Hz, where majority of human speech are.
verbose : bool or None, default False
Whether to display the text being decoded to the console.
Displays all the details if ``True``. Displays progressbar if ``False``. Display nothing if ``None``.
Returns
-------
stable_whisper.result.WhisperResult
All timestamps, words, probabilities, and other data from the refinement of ``text`` with ``audio``.
Notes
-----
The lower the ``precision``, the longer the processing time.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3')
>>> model.refine('audio.mp3', result)
>>> result.to_srt_vtt('audio.srt')
Saved 'audio.srt'
</details>
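A sketch of a more targeted refinement with illustrative values: only end timestamps are refined (`'e'`) and at a finer precision, which trades speed for accuracy.
```python
model.refine('audio.mp3', result, steps='e', precision=0.05)
result.to_srt_vtt('audio_refined.srt')
```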
### Regrouping Words
Stable-ts has a preset for regrouping words into different segments with more natural boundaries.
This preset is enabled by `regroup=True` (default).
But there are other built-in [regrouping methods](#regrouping-methods) that allow you to customize the regrouping algorithm.
This preset is just a predefined combination of those methods.
https://github.com/jianfch/stable-ts/assets/28970749/7b6164a3-50e2-4368-8b75-853cb14045ec
```python
# The following results are all functionally equivalent:
result0 = model.transcribe('audio.mp3', regroup=True) # regroup is True by default
result1 = model.transcribe('audio.mp3', regroup=False)
(
result1
.clamp_max()
.split_by_punctuation([('.', ' '), '。', '?', '?', (',', ' '), ','])
.split_by_gap(.5)
.merge_by_gap(.3, max_words=3)
.split_by_punctuation([('.', ' '), '。', '?', '?'])
)
result2 = model.transcribe('audio.mp3', regroup='cm_sp=.* /。/?/?/,* /,_sg=.5_mg=.3+3_sp=.* /。/?/?')
# To undo all regrouping operations:
result0.reset()
```
Any regrouping algorithm can be expressed as a string. Please feel free to share your strings [here](https://github.com/jianfch/stable-ts/discussions/162)
#### Regrouping Methods
<details>
<summary>regroup()</summary>
Regroup (in-place) words into segments.
Parameters
----------
regroup_algo: str or bool, default 'da'
String representation of a custom regrouping algorithm or ``True`` to use the default algorithm 'da'.
verbose : bool, default False
Whether to show all the methods and arguments parsed from ``regroup_algo``.
only_show : bool, default False
Whether to show all the methods and arguments parsed from ``regroup_algo`` without running them.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
Notes
-----
Syntax for string representation of custom regrouping algorithm.
Method keys:
sg: split_by_gap
sp: split_by_punctuation
sl: split_by_length
sd: split_by_duration
mg: merge_by_gap
mp: merge_by_punctuation
ms: merge_all_segments
cm: clamp_max
l: lock
us: unlock_all_segments
da: default algorithm (cm_sp=.* /。/?/?/,* /,_sg=.5_mg=.3+3_sp=.* /。/?/?)
rw: remove_word
rs: remove_segment
rp: remove_repetition
rws: remove_words_by_str
fg: fill_in_gaps
Metacharacters:
= separates a method key and its arguments (not used if no argument)
_ separates method keys (after arguments if there are any)
+ separates arguments for a method key
/ separates an argument into list of strings
* separates an item in list of strings into a nested list of strings
Notes:
-arguments are parsed positionally
-if no argument is provided, the default ones will be used
-use 1 or 0 to represent True or False
Example 1:
merge_by_gap(.2, 10, lock=True)
mg=.2+10+++1
Note: [lock] is the 5th argument, hence the 2 missing arguments in between the three + before 1
Example 2:
split_by_punctuation([('.', ' '), '。', '?', '?'], True)
sp=.* /。/?/?+1
Example 3:
merge_all_segments().split_by_gap(.5).merge_by_gap(.15, 3)
ms_sg=.5_mg=.15+3
</details>
<details>
<summary>split_by_gap()</summary>
Split (in-place) any segment where the gap between two of its words is greater than ``max_gap``.
Parameters
----------
max_gap : float, default 0.1
Maximum second(s) allowed between two words of the same segment.
lock : bool, default False
Whether to prevent future splits/merges from altering changes made by this method.
newline: bool, default False
Whether to insert line break at the split points instead of splitting into separate segments.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
<details>
<summary>split_by_punctuation()</summary>
Split (in-place) segments at words that start/end with ``punctuation``.
Parameters
----------
punctuation : list of str of list of tuple of (str, str) or str
Punctuation(s) to split segments by.
lock : bool, default False
Whether to prevent future splits/merges from altering changes made by this method.
newline : bool, default False
Whether to insert line break at the split points instead of splitting into separate segments.
min_words : int, optional
Split segments with words >= ``min_words``.
min_chars : int, optional
Split segments with characters >= ``min_chars``.
min_dur : int, optional
Split segments with duration (in seconds) >= ``min_dur``.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
<details>
<summary>split_by_length()</summary>
Split (in-place) any segment that exceeds ``max_chars`` or ``max_words`` into smaller segments.
Parameters
----------
max_chars : int, optional
Maximum number of characters allowed in each segment.
max_words : int, optional
Maximum number of words allowed in each segment.
even_split : bool, default True
Whether to evenly split a segment in length if it exceeds ``max_chars`` or ``max_words``.
force_len : bool, default False
Whether to force a constant length for each segment except the last segment.
This will ignore all previous non-locked segment boundaries.
lock : bool, default False
Whether to prevent future splits/merges from altering changes made by this method.
include_lock: bool, default False
Whether to include previous lock before splitting based on max_words, if ``even_split = False``.
Splitting will be done after the first non-locked word > ``max_chars`` / ``max_words``.
newline: bool, default False
Whether to insert line break at the split points instead of splitting into separate segments.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
Notes
-----
If ``even_split = True``, segments can still exceed ``max_chars`` and locked words will be ignored to avoid
uneven splitting.
</details>
<details>
<summary>split_by_duration()</summary>
Split (in-place) any segment that exceeds ``max_dur`` into smaller segments.
Parameters
----------
max_dur : float
Maximum duration (in seconds) per segment.
even_split : bool, default True
Whether to evenly split a segment in length if it exceeds ``max_dur``.
force_len : bool, default False
Whether to force a constant length for each segment except the last segment.
This will ignore all previous non-locked segment boundaries.
lock : bool, default False
Whether to prevent future splits/merges from altering changes made by this method.
include_lock: bool, default False
Whether to include previous lock before splitting based on max_words, if ``even_split = False``.
Splitting will be done after the first non-locked word > ``max_dur``.
newline: bool, default False
Whether to insert line break at the split points instead of splitting into separate segments.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
Notes
-----
If ``even_split = True``, segments can still exceed ``max_dur`` and locked words will be ignored to avoid
uneven splitting.
</details>
<details>
<summary>merge_by_gap()</summary>
Merge (in-place) any pair of adjacent segments if the gap between them <= ``min_gap``.
Parameters
----------
min_gap : float, default 0.1
Minimum second(s) allowed between two segments.
max_words : int, optional
Maximum number of words allowed in each segment.
max_chars : int, optional
Maximum number of characters allowed in each segment.
is_sum_max : bool, default False
Whether ``max_words`` and ``max_chars`` are applied to the merged segment instead of the individual segments
to be merged.
lock : bool, default False
Whether to prevent future splits/merges from altering changes made by this method.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
<details>
<summary>merge_by_punctuation()</summary>
Merge (in-place) any two segments that have specific punctuations in between.
Parameters
----------
punctuation : list of str of list of tuple of (str, str) or str
Punctuation(s) to merge segments by.
max_words : int, optional
Maximum number of words allowed in each segment.
max_chars : int, optional
Maximum number of characters allowed in each segment.
is_sum_max : bool, default False
Whether ``max_words`` and ``max_chars`` is applied to the merged segment instead of the individual segments
to be merged.
lock : bool, default False
Whether to prevent future splits/merges from altering changes made by this method.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
<details>
<summary>merge_all_segments()</summary>
Merge all segments into one segment.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
<details>
<summary>clamp_max()</summary>
Clamp all word durations above certain value.
This is most effective when applied before and after other regroup operations.
Parameters
----------
medium_factor : float, default 2.5
Clamp durations above (``medium_factor`` * medium duration) per segment.
If ``medium_factor = None/0`` or the segment has fewer than 3 words, it will be ignored and only ``max_dur`` will be used.
max_dur : float, optional
Clamp durations above ``max_dur``.
clip_start : bool or None, default None
Whether to clamp the start of a word. If ``None``, clamp the start of first word and end of last word per
segment.
verbose : bool, default False
Whether to print out the timestamp changes.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
<details>
<summary>lock()</summary>
Lock words/segments with matching prefix/suffix to prevent splitting/merging.
Parameters
----------
startswith: str or list of str
Prefixes to lock.
endswith: str or list of str
Suffixes to lock.
right : bool, default True
Whether to prevent splits/merges with the next word/segment.
left : bool, default False
Whether to prevent splits/merges with the previous word/segment.
case_sensitive : bool, default False
Whether to match the case of the prefixes/suffixes with the words/segments.
strip : bool, default True
Whether to ignore spaces before and after both words/segments and prefixes/suffixes.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
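These methods return the modified instance, so they can be chained; a minimal sketch with illustrative values that locks sentence-final words before merging short gaps:
```python
# lock words ending with sentence-final punctuation so later splits/merges cannot cross them
result.lock(endswith=['.', '?', '!'], right=True, left=True)
# then merge segments separated by gaps <= 0.15s, up to 3 words each
result.merge_by_gap(0.15, max_words=3)
```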
### Editing
The editing methods in stable-ts can be chained with [Regrouping Methods](#regrouping-methods) and used in `regroup()`.
Remove specific instances of words or segments:
```python
# Remove first word of the first segment:
first_word = result[0][0]
result.remove_word(first_word)
# The following also does the same:
del result[0][0]
# Remove the last segment:
last_segment = result[-1]
result.remove_segment(last_segment)
# The following also does the same:
del result[-1]
```
Docstrings:
<details>
<summary>remove_word()</summary>
Remove a word.
Parameters
----------
word : WordTiming or tuple of (int, int)
Instance of :class:`stable_whisper.result.WordTiming` or tuple of (segment index, word index).
reassign_ids : bool, default True
Whether to reassign segment and word ids (indices) after removing ``word``.
verbose : bool, default True
Whether to print detail of the removed word.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
<details>
<summary>remove_segment()</summary>
Remove a segment.
Parameters
----------
segment : Segment or int
Instance :class:`stable_whisper.result.Segment` or segment index.
reassign_ids : bool, default True
Whether to reassign segment IDs (indices) after removing ``segment``.
verbose : bool, default True
Whether to print detail of the removed segment.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
Removing repetitions:
```python
# Example 1: "This is is is a test." -> "This is a test."
# The following removes the last two " is":
result.remove_repetition(1)
# Example 2: "This is is is a test this is a test." -> "This is a test."
# The following removes the second " is" and third " is", then removes the last "this is a test"
# The first parameter `max_words` is `4` because "this is a test" consists of 4 words
result.remove_repetition(4)
```
Docstring:
<details>
<summary>remove_repetition()</summary>
Remove words that repeat consecutively.
Parameters
----------
max_words : int
Maximum number of words to look for consecutively.
case_sensitive : bool, default False
Whether the case of words need to match to be considered as repetition.
strip : bool, default True
Whether to ignore spaces before and after each word.
ignore_punctuations : str, default '"',.?!'
Ending punctuations to ignore.
extend_duration: bool, default True
Whether to extend the duration of the previous word to cover the duration of the repetition.
verbose: bool, default True
Whether to print detail of the removed repetitions.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
Removing specific word(s) by string content:
```python
# Remove all " ok" from " ok ok this is a test."
result.remove_words_by_str('ok')
# Remove all " ok" and " Um..." from " ok this is a test. Um..."
result.remove_words_by_str(['ok', 'um'])
```
Docstring:
<details>
<summary>remove_words_by_str()</summary>
Remove words that match ``words``.
Parameters
----------
words : str or list of str or None
A word or list of words to remove. ``None`` for all words to be passed into ``filters``.
case_sensitive : bool, default False
Whether the case of words need to match to be considered as repetition.
strip : bool, default True
Whether to ignore spaces before and after each word.
ignore_punctuations : str, default '"',.?!'
Ending punctuations to ignore.
min_prob : float, optional
Acts as the first filter for the words that match ``words``. Words with probability < ``min_prob`` will
be removed if ``filters`` is ``None``, else pass the words into ``filters``. Words without probability will
be treated as having probability < ``min_prob``.
filters : Callable, optional
A function that takes an instance of :class:`stable_whisper.result.WordTiming` as its only argument.
This function is a custom filter for the words that match ``words`` and were not caught by ``min_prob``.
verbose:
Whether to print detail of the removed words.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
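A sketch combining ``min_prob`` with a custom ``filters`` callable (this assumes :class:`stable_whisper.result.WordTiming` exposes ``start`` and ``end`` attributes; the thresholds and the duration-based filter are illustrative):
```python
# remove 'uh'/'um'; `min_prob` acts as the first filter and the lambda handles
# the matching words it does not catch (hypothetical duration-based filter)
result.remove_words_by_str(
    ['uh', 'um'],
    min_prob=0.5,
    filters=lambda word: (word.end - word.start) < 0.15,
)
```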
Filling in segment gaps:
```python
# result0: [" How are you?"] [" I'm good."] [" Good!"]
# result1: [" Hello!"] [" How are you?"] [" How about you?"] [" Good!"]
result0.fill_in_gaps(result1)
# After filling in the gaps in `result0` with contents in `result1`:
# result0: [" Hello!"] [" How are you?"] [" I'm good."] [" How about you?"] [" Good!"]
```
Docstring:
<details>
<summary>fill_in_gaps()</summary>
Fill in segment gaps larger than ``min_gap`` with content from ``other_result`` at the times of gaps.
Parameters
----------
other_result : WhisperResult or str
Another transcription result as an instance of :class:`stable_whisper.result.WhisperResult` or path to the
JSON of the result.
min_gap : float, default 0.1
The minimum seconds of a gap between segments that must be exceeded to be filled in.
case_sensitive : bool, default False
Whether to consider the case of the first and last word of the gap to determine overlapping words to remove
before filling in.
strip : bool, default True
Whether to ignore spaces before and after the first and last word of the gap to determine overlapping words
to remove before filling in.
ignore_punctuations : str, default '"',.?!'
Ending punctuations to ignore in the first and last word of the gap to determine overlapping words to
remove before filling in.
verbose:
Whether to print detail of the filled content.
Returns
-------
stable_whisper.result.WhisperResult
The current instance after the changes.
</details>
### Locating Words
There are two ways to locate words.
The first way is by approximating the time at which the words are spoken,
then transcribing a few seconds around the approximated time.
This is also the faster way for locating words.
```python
matches = model.locate('audio.mp3', 'are', language='en', count=0)
for match in matches:
print(match.to_display_str())
# verbose=True does the same thing as this for-loop.
```
Docstring:
<details>
<summary>locate()</summary>
Locate when specific words are spoken in ``audio`` without fully transcribing.
This is useful for quickly finding at what time specific words or phrases are spoken in an audio. Since it
does not need to transcribe the audio to approximate the time, it is significantly faster than transcribing and then
locating the word in the transcript.
It can also transcribe a few seconds around the approximated time to find out what was said around those words or
confirm if the word was even spoken near that time.
Parameters
----------
model : whisper.model.Whisper
An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes
Path/URL to the audio file, the audio waveform, or bytes of audio file.
If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled to 16kHz.
text: str or list of int
Words/phrase or list of tokens to search for in ``audio``.
language : str
Language of the ``text``.
count : int, default 1, meaning stop search after 1 match
Number of matches to find. Use 0 to look for all.
duration_window : float or tuple of (float, float), default 3.0, same as (3.0, 3.0)
Seconds before and after the end timestamp approximations to transcribe after mode 1.
If tuple pair of values, then the 1st value will be seconds before the end and 2nd value will be seconds after.
mode : int, default 0
Mode of search.
2, Approximates the end timestamp of ``text`` in the audio. This mode does not confirm whether ``text`` is
spoken at the timestamp
1, Completes mode 2 then transcribes audio within ``duration_window`` to confirm whether `text` is a match at
the approximated timestamp by checking if ``text`` at that ``duration_window`` is within
``probability_threshold`` or matching the string content if ``text`` with the transcribed text at the
``duration_window``.
0, Completes mode 1 then add word timestamps to the transcriptions of each match.
Modes from fastest to slowest: 2, 1, 0
start : float, optional, meaning it starts from 0s
Seconds into the audio to start searching for ``text``.
end : float, optional
Seconds into the audio to stop searching for ``text``.
probability_threshold : float, default 0.5
Minimum probability of each token in ``text`` for it to be considered a match.
eots : int, default 1
Number of EOTs to reach before stopping transcription at mode 1. When transcription reaches an EOT, it usually
means the end of the segment or audio. Once ``text`` is found in the ``duration_window``, the transcription
will stop immediately upon reaching an EOT.
max_token_per_seg : int, default 20
Maximum number of tokens to transcribe in the ``duration_window`` before stopping.
exact_token : bool, default False
Whether to find a match based on the exact tokens that make up ``text``.
case_sensitive : bool, default False
Whether to consider the case of ``text`` when matching in string content.
verbose : bool or None, default False
Whether to display the text being decoded to the console.
Displays all the details if ``True``. Displays progressbar if ``False``. Display nothing if ``None``.
initial_prompt : str, optional
Text to provide as a prompt for the first window. This can be used to provide, or
"prompt-engineer" a context for transcription, e.g. custom vocabularies or proper nouns
to make it more likely to predict those words correctly.
suppress_tokens : str or list of int, default '-1', meaning suppress special characters except common punctuations
List of tokens to suppress.
demucs : bool or torch.nn.Module, default False
Whether to preprocess ``audio`` with Demucs to isolate vocals / remove noise. Set ``demucs`` to an instance of
a Demucs model to avoid reloading the model for each run.
Demucs must be installed to use. Official repo, https://github.com/facebookresearch/demucs.
demucs_options : dict, optional
Options to use for :func:`stable_whisper.audio.demucs_audio`.
only_voice_freq : bool, default False
Whether to only use sound between 200 - 5000 Hz, where majority of human speech are.
Returns
-------
stable_whisper.result.Segment or list of dict or list of float
Mode 0, list of instances of :class:`stable_whisper.result.Segment`.
Mode 1, list of dictionaries with end timestamp approximation of matches and transcribed neighboring words.
Mode 2, list of timestamps in seconds for each end timestamp approximation.
Notes
-----
For ``text``, the case and spacing matter as 'on', ' on', ' On' are different tokens; therefore, choose the one that
best suits the context (e.g. ' On' to look for it at the beginning of a sentence).
Use a sufficiently large first value of ``duration_window``, i.e. a value greater than the time it is expected to take to speak ``text``.
If ``exact_token = False`` and the string content matches, then ``probability_threshold`` is not used.
Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> matches = model.locate('audio.mp3', 'are', 'English', verbose=True)
Some words can sound the same but have different spellings; to increase the chance of finding such words, use
``initial_prompt``.
>>> matches = model.locate('audio.mp3', ' Nickie', 'English', verbose=True, initial_prompt='Nickie')
</details>
<details>
<summary>CLI</summary>
```
stable-ts audio.mp3 --locate "are" --language en -to "count=0"
```
</details>
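A sketch of the fastest mode, which only approximates end timestamps within a window of the audio (the word and window are illustrative):
```python
# mode=2 returns a list of end-timestamp approximations (in seconds) without confirming the matches
approx_ends = model.locate('audio.mp3', ' On', language='en', mode=2, start=0, end=60, count=0)
print(approx_ends)
```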
The second way allows you to locate words with regular expression,
but it requires the audio to be fully transcribed first.
```python
result = model.transcribe('audio.mp3')
# Find every sentence that contains "and"
matches = result.find(r'[^.]+and[^.]+\.')
# print all the matches if there are any
for match in matches:
print(f'match: {match.text_match}\n'
f'text: {match.text}\n'
f'start: {match.start}\n'
f'end: {match.end}\n')
# Find the word before and after "and" in the matches
matches = matches.find(r'\s\S+\sand\s\S+')
for match in matches:
print(f'match: {match.text_match}\n'
f'text: {match.text}\n'
f'start: {match.start}\n'
f'end: {match.end}\n')
```
Docstring:
<details>
<summary>find()</summary>
Find segments/words and timestamps with regular expression.
Parameters
----------
pattern : str
RegEx pattern to search for.
word_level : bool, default True
Whether to search at word-level.
flags : optional
RegEx flags.
Returns
-------
stable_whisper.result.WhisperResultMatches
An instance of :class:`stable_whisper.result.WhisperResultMatches` with word/segment that match ``pattern``.
</details>
### Silence Suppression
While the timestamps predicted by Whisper are generally accurate,
it sometimes predicts the start of a word way before the word is spoken
or the end of a word long after the word has been spoken.
This is where "silence suppression" helps. It is enabled by default (`suppress_silence=True`).
The idea is to adjust the timestamps based on the timestamps of non-speech portions of the audio.

*Note: In 1.X, "silence suppression" refers to the process of suppressing timestamp tokens of the silent portions during inference,
but changed to post-inference timestamp adjustments in 2.X, which allows stable-ts to be used with other ASR models.
The timestamp token suppression feature is disabled by default, but can still be enabled with `suppress_ts_tokens=True`.*
By default, stable-ts determines the non-speech timestamps based on
how loud a section of the audio is relative to the neighboring sections.
This method is most effective for cases where the speech is significantly louder than the background noise.
The other method is to use [Silero VAD](https://github.com/snakers4/silero-vad) (enabled with `vad=True`).
To visualize the differences between non-VAD and VAD, see [Visualizing Suppression](#visualizing-suppression).
Besides the parameters for non-speech detection sensitivity (see [Visualizing Suppression](#visualizing-suppression)),
the following parameters are used to combat inaccurate non-speech detection.<br>
`min_word_dur` is the shortest duration each word is allowed to have after adjustments.<br>
`nonspeech_error` is the relative error of the non-speech that appears in between a word.<br>
`use_word_position` is whether to use word position in segment to determine whether to keep end or start timestamps
*Note: `nonspeech_error` was not available before 2.14.0; `use_word_position` was not available before 2.14.2;
`min_word_dur` prevented any adjustments that resulted in word duration shorter than `min_word_dur`.*
For the following example, `min_word_dur=0.5` (default: 0.1) and `nonspeech_error=0.3` (default: 0.3).

`nonspeech_error=0.3` allows each non-speech section to be treated as 1.3 times its actual duration.
Either from the start of the corresponding word to the end of the non-speech
or from the start of the non-speech to the end of the corresponding word.
In the case that both conditions are met, the shorter one is used.
Or if both are equal, then the start of the non-speech to the end of the word is used.<br>
The second non-speech from 1.375s to 1.75s is ignored for 'world.' because it failed both conditions.<br>
The first word, 'Hello', satisfies only the former condition from 0s to 0.625, thus the new start for 'Hello'
would be 0.625s. However, `min_word_dur=0.5` requires the resultant duration to be at least 0.5s.
As a result, the start of 'Hello' is changed to 0.375s instead of 0.625s.
Furthermore, the default setting, `use_word_position=True`, also ensures the start is adjusted for the first word
and the end is adjusted for the last word of the segment as long as one of the conditions is true.
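A minimal sketch of these options on `transcribe()` (the values mirror the example above and are illustrative, not recommendations):
```python
result = model.transcribe(
    'audio.mp3',
    suppress_silence=True,    # already the default; shown for clarity
    vad=True,                 # use Silero VAD instead of volume-based detection
    min_word_dur=0.5,         # never shrink a word below 0.5s
    nonspeech_error=0.3,      # tolerate 30% relative error on non-speech sections
    use_word_position=True,   # already the default; shown for clarity
)
```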
### Tips
- do not disable word timestamps with `word_timestamps=False` for reliable segment timestamps
- use `vad=True` for more accurate non-speech detection
- use `demucs=True` to isolate vocals with [Demucs](https://github.com/facebookresearch/demucs); it is also effective at isolating vocals even if there is no music
- use `demucs=True` and `vad=True` for music
- set the same seed for each transcription (e.g. `random.seed(0)`) for `demucs=True` to produce deterministic outputs, as sketched after this list
- to enable dynamic quantization for inference on CPU use `--dq true` for CLI or `dq=True` for `stable_whisper.load_model`
- use `encode_video_comparison()` to encode multiple transcripts into one video for synced comparison; see [Encode Comparison](#encode-comparison)
- use `visualize_suppression()` to visualize the differences between non-VAD and VAD options; see [Visualizing Suppression](#visualizing-suppression)
- [refinement](#refinement) can be an effective (but slow) alternative for polishing timestamps if silence suppression isn't effective
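A sketch combining several of these tips (the seed value is arbitrary):
```python
import random
import stable_whisper

random.seed(0)  # same seed each run so Demucs preprocessing is deterministic
model = stable_whisper.load_model('base', dq=True)  # dynamic quantization for CPU inference
result = model.transcribe('audio.mp3', demucs=True, vad=True)
result.to_srt_vtt('audio.srt')
```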
### Visualizing Suppression
You can visualize which parts of the audio will likely be suppressed (i.e. marked as silent).
Requires: [Pillow](https://github.com/python-pillow/Pillow) or [opencv-python](https://github.com/opencv/opencv-python).
#### Without VAD
```python
import stable_whisper
# regions on the waveform colored red are where it will likely be suppressed and marked as silent
# [q_levels]=20 and [k_size]=5 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', q_levels=20, k_size=5)
```

#### With [Silero VAD](https://github.com/snakers4/silero-vad)
```python
# [vad_threshold]=0.35 (default)
stable_whisper.visualize_suppression('audio.mp3', 'image.png', vad=True, vad_threshold=0.35)
```

Docstring:
<details>
<summary>visualize_suppression()</summary>
Visualize regions on the waveform of ``audio`` detected as silent.
Regions on the waveform colored red are detected as silent.
Parameters
----------
audio : str or numpy.ndarray or torch.Tensor or bytes
Path/URL to the audio file, the audio waveform, or bytes of audio file.
If audio is ``numpy.ndarray`` or ``torch.Tensor``, the audio must already be sampled to 16kHz.
output : str, default None, meaning image will be shown directly via Pillow or opencv-python
Path to save visualization.
q_levels : int, default 20
Quantization levels for generating timestamp suppression mask; ignored if ``vad = true``.
Acts as a threshold to marking sound as silent.
Fewer levels will increase the threshold of volume at which to mark a sound as silent.
k_size : int, default 5
Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = true``.
Recommend 5 or 3; higher sizes will reduce detection of silence.
vad : bool, default False
Whether to use Silero VAD to generate timestamp suppression mask.
Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
max_width : int, default 1500
Maximum width of visualization to avoid overly large image from long audio.
Each unit of pixel is equivalent to 1 token. Use -1 to visualize the entire audio track.
height : int, default 200
Height of visualization.
</details>
### Encode Comparison
You can encode videos similar to the ones in the doc for comparing transcriptions of the same audio.
```python
stable_whisper.encode_video_comparison(
'audio.mp3',
['audio_sub1.srt', 'audio_sub2.srt'],
output_videopath='audio.mp4',
labels=['Example 1', 'Example 2']
)
```
Docstring:
<details>
<summary>encode_video_comparison()</summary>
Encode multiple subtitle files into one video with the subtitles vertically stacked.
Parameters
----------
audiofile : str
Path of audio file.
subtitle_files : list of str
List of paths for subtitle file.
output_videopath : str, optional
Output video path.
labels : list of str, default None, meaning use ``subtitle_files`` as labels
List of labels for ``subtitle_files``.
height : int, default 90
Height for each subtitle section.
width : int, default 720
Width for each subtitle section.
color : str, default 'black'
Background color of the video.
fontsize: int, default 70
Font size for subtitles.
border_color : str, default 'white'
Border color for separating the sections of subtitle.
label_color : str, default 'white'
Color of labels.
label_size : int, default 14
Font size of labels.
fps : int, default 25
Frame-rate of the video.
video_codec : str, optional
Video codec of the video.
audio_codec : str, optional
Audio codec of the video.
overwrite : bool, default False
Whether to overwrite existing video files with the same path as the output video.
only_cmd : bool, default False
Whether to skip encoding and only return the full command generated from the specified options.
verbose : bool, default True
Whether to display ffmpeg processing info.
Returns
-------
str or None
Encoding command as a string if ``only_cmd = True``.
</details>
#### Multiple Files with CLI
Transcribe multiple audio files then process the results directly into SRT files.
```commandline
stable-ts audio1.mp3 audio2.mp3 audio3.mp3 -o audio1.srt audio2.srt audio3.srt
```
### Any ASR
You can use most of the features of Stable-ts to improve the results of any ASR model/APIs.
[Just follow this notebook](https://github.com/jianfch/stable-ts/blob/main/examples/non-whisper.ipynb).
## Quick 1.X → 2.X Guide
### What's new in 2.0.0?
- updated to use Whisper's more reliable word-level timestamps method.
- the more reliable word timestamps allow regrouping all words into segments with more natural boundaries.
- can now suppress silence with [Silero VAD](https://github.com/snakers4/silero-vad) (requires PyTorch 1.12.0+)
- non-VAD silence suppression is also more robust
### Usage changes
- `results_to_sentence_srt(result, 'audio.srt')` → `result.to_srt_vtt('audio.srt', word_level=False)`
- `results_to_word_srt(result, 'audio.srt')` → `result.to_srt_vtt('output.srt', segment_level=False)`
- `results_to_sentence_word_ass(result, 'audio.srt')` → `result.to_ass('output.ass')`
- there's no need to stabilize segments after inference because they're already stabilized during inference
- `transcribe()` returns a `WhisperResult` object which can be converted to a `dict` with `.to_dict()`, e.g. `result.to_dict()`
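Putting the new methods together, a minimal 2.X workflow looks like this (model size and file paths are placeholders):

```python
import stable_whisper

model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')            # returns a WhisperResult
result.to_srt_vtt('audio.srt', word_level=False)  # segment-level SRT
result.to_ass('audio.ass')                        # ASS with segment and word timing
data = result.to_dict()                           # plain dict if you need JSON-style access
```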
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details
## Acknowledgments
Includes slight modification of the original work: [Whisper](https://github.com/openai/whisper)
|
AzzamRadman/dqn-SpaceInvadersNoFrameskip-v4
|
AzzamRadman
| 2024-01-13T16:54:37Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T16:54:02Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 670.00 +/- 149.42
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AzzamRadman -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AzzamRadman -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AzzamRadman
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
shobhit18/Taxi-v3
|
shobhit18
| 2024-01-13T16:53:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T16:52:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper from the Deep RL course notebook (loads the pickled Q-table dict)
model = load_from_hub(repo_id="shobhit18/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gflexx/q-FrozenLake-v1-4x4-noSlippery
|
gflexx
| 2024-01-13T16:47:34Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T16:47:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the helper from the Deep RL course notebook (loads the pickled Q-table dict)
model = load_from_hub(repo_id="gflexx/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wcyat/whisper-small-yue-5
|
wcyat
| 2024-01-13T16:44:42Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:wcyat/whisper-small-yue",
"base_model:finetune:wcyat/whisper-small-yue",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-13T11:40:52Z |
---
base_model: wcyat/whisper-small-yue
tags:
- generated_from_trainer
model-index:
- name: whisper-small-yue-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-yue-5
This model is a fine-tuned version of [wcyat/whisper-small-yue](https://huggingface.co/wcyat/whisper-small-yue) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3094
- Cer: 11.2185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0079 | 2.83 | 1000 | 0.2938 | 11.8706 |
| 0.0005 | 5.67 | 2000 | 0.3094 | 11.2185 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
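A minimal inference sketch for this checkpoint (the audio file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="wcyat/whisper-small-yue-5")
print(asr("audio.wav")["text"])
```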
|
sumangpt/adapter_1
|
sumangpt
| 2024-01-13T16:26:24Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2024-01-13T16:26:23Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
hxxris/haaris-transformer-optimizer
|
hxxris
| 2024-01-13T16:23:45Z | 146 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-13T15:40:24Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: haaris-transformer-optimizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-transformer-optimizer
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6422
- Accuracy: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.73 | 2 | 2.6422 | 0.0708 |
| No log | 1.45 | 4 | 2.6422 | 0.0708 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
amayprro552/customModelsFID
|
amayprro552
| 2024-01-13T16:17:19Z | 1 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-13T15:52:57Z |
---
license: creativeml-openrail-m
---
<b>This model is available on <a href="https://www.mage.space/">Mage.Space</a> (main sponsor)</b><br>
<b>Please read this!</b><br>
This is not yet the full version of the model (read the <b>"Model Description"</b> section).<br>
For version 6.0 it is recommended to use with VAE (to improve generation quality and get rid of artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br>
<b>Model Description</b><br>
Realistic Vision V6.0 "New Vision" is a global update for the Realistic Vision model, which will be released gradually in several beta versions until the full release. The model is aimed at realism and photorealism.<br>
CivitAI Page: https://civitai.com/models/4201/realistic-vision-v60-b1?modelVersionId=245598
<b>Resolutions (use lower resolution if you get a lot of mutations and stuff like that)</b><br>
- Face Portrait: 896x896<br>
- Portrait: 896x896, 768x1024<br>
- Half Body: 768x1024, 640x1152<br>
- Full Body: 896x896, 768x1024, 640x1152, 1024x768, 1152x640<br>
<b>Improvements</b>
- increased generation resolution to such resolutions as: 896x896, 768x1024, 640x1152, 1024x768, 1152x640. (note. in some cases there may still be mutations, duplications, etc -> will be fixed in future versions).<br>
- improved sfw and nsfw for female and female anatomy (note. not all poses work correctly in such large resolutions -> will be fixed in future versions).<br>
<b>Recommended Workflow</b><br>
Images can be generated with or without Hires.Fix, but it will help improve the generation quality significantly. In some cases it is strictly recommended to use Hires.Fix, namely when generating full body and half body images (note: you can also use Restore Faces or ADetailer).<br>
<b>Recommended Generation Parameters</b><br>
Sampler: DPM++ SDE Karras (25+ steps) / DPM++ 2M SDE (50+ steps)<br>
Negative Prompt: (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br>
<b>Recommended Hires.Fix Parameters</b><br>
Sampler: DPM++ SDE Karras or DPM++ 2M SDE<br>
Denoising steps: 10+ (DPM++ SDE Karras) / 20+ (DPM++ 2M SDE (notice. the lower the value of hires steps at a given sampler, the stronger the skin texture and the higher the chance of getting artifacts))<br>
Denoising strength: 0.1-0.3<br>
Upscaler: 4x-UltraSharp / 4x_NMKD-Superscale-SP_178000_G or another<br>
Upscale by: 1.1-2.0+<br>
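<b>Diffusers Example (unofficial)</b><br>
A hedged Diffusers sketch applying the recommendations above; the repo id, the prompt text, the scheduler mapping, and the diffusers-format VAE repo are assumptions (the card links the original-format VAE, and the recommended DPM++ SDE sampler is approximated with `DPMSolverMultistepScheduler`):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler, AutoencoderKL

# Diffusers-format counterpart of the recommended ft-MSE VAE (assumption)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "amayprro552/customModelsFID", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Approximates the recommended DPM++ 2M SDE (Karras) sampler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

image = pipe(
    "photo, portrait of a woman, natural light, high detail skin",  # placeholder prompt
    negative_prompt="(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, "
                    "cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality",
    num_inference_steps=50,
    width=896, height=896,
).images[0]
image.save("portrait.png")
```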
|
shobhit18/ppo-LunarLander-v2
|
shobhit18
| 2024-01-13T16:05:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T19:14:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.78 +/- 17.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="shobhit18/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
snewcomer/phi-2-finetuned
|
snewcomer
| 2024-01-13T16:03:59Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-10T15:47:27Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 600
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
HassanSamo/Mistral7b-instruc-v2-python
|
HassanSamo
| 2024-01-13T15:54:04Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:HassanSamo/Python-Q_A",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-10T13:13:48Z |
---
license: apache-2.0
datasets:
- HassanSamo/Python-Q_A
language:
- en
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amayprro552/customFaID
|
amayprro552
| 2024-01-13T15:49:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-13T15:46:27Z |
---
license: creativeml-openrail-m
---
<b>This model is available on <a href="https://www.mage.space/">Mage.Space</a> (main sponsor)</b><br>
<b>Please read this!</b><br>
This is not yet the full version of the model (read the <b>"Model Description"</b> section).<br>
For version 6.0 it is recommended to use with VAE (to improve generation quality and get rid of artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br>
<b>Model Description</b><br>
Realistic Vision V6.0 "New Vision" is a global update for the Realistic Vision model, which will be released gradually in several beta versions until the full release. The model is aimed at realism and photorealism.<br>
CivitAI Page: https://civitai.com/models/4201/realistic-vision-v60-b1?modelVersionId=245598
<b>Resolutions (use lower resolution if you get a lot of mutations and stuff like that)</b><br>
- Face Portrait: 896x896<br>
- Portrait: 896x896, 768x1024<br>
- Half Body: 768x1024, 640x1152<br>
- Full Body: 896x896, 768x1024, 640x1152, 1024x768, 1152x640<br>
<b>Improvements</b>
- increased generation resolution to such resolutions as: 896x896, 768x1024, 640x1152, 1024x768, 1152x640. (note. in some cases there may still be mutations, duplications, etc -> will be fixed in future versions).<br>
- improved sfw and nsfw for female and female anatomy (note. not all poses work correctly in such large resolutions -> will be fixed in future versions).<br>
<b>Recommended Workflow</b><br>
Images can be generated with or without Hires.Fix, but it will help improve the generation quality significantly. In some cases it is strictly recommended to use Hires.Fix, namely when generating full body and half body images (note: you can also use Restore Faces or ADetailer).<br>
<b>Recommended Generation Parameters</b><br>
Sampler: DPM++ SDE Karras (25+ steps) / DPM++ 2M SDE (50+ steps)<br>
Negative Prompt: (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br>
<b>Recommended Hires.Fix Parameters</b><br>
Sampler: DPM++ SDE Karras or DPM++ 2M SDE<br>
Denoising steps: 10+ (DPM++ SDE Karras) / 20+ (DPM++ 2M SDE (notice. the lower the value of hires steps at a given sampler, the stronger the skin texture and the higher the chance of getting artifacts))<br>
Denoising strength: 0.1-0.3<br>
Upscaler: 4x-UltraSharp / 4x_NMKD-Superscale-SP_178000_G or another<br>
Upscale by: 1.1-2.0+<br>
|
ambrosfitz/tinyllama-history-chat-v1.1
|
ambrosfitz
| 2024-01-13T15:48:15Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T20:03:29Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
We took the Tinyllama format and fine-tuned the model with a history focus.
### Model Description
This model was fine-tuned using a dataset based on the open-source textbooks the American Yawp and OpenStax US History. Question-and-answer pairs in the dataset were
generated using Claude.ai and ChatGPT 3.5.
- **Developed by:** ambrosfitz
- **Model type:** llama
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model:** Tinyllama
## Uses
The purpose of this model is to provide a more fine-tuned, specific model for questions on history. Future versions will focus on open-source history journals,
primarily from an American History perspective.
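A minimal text-generation sketch with Transformers (the prompt layout is an assumption; adjust it to the chat format used during fine-tuning):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ambrosfitz/tinyllama-history-chat-v1.1")
prompt = "Question: What were the main causes of the War of 1812?\nAnswer:"
print(generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)[0]["generated_text"])
```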
|
milaidy/seitard
|
milaidy
| 2024-01-13T15:34:12Z | 6 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-13T15:28:30Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### seitard Dreambooth model trained by milaidy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
sumangpt/adapter
|
sumangpt
| 2024-01-13T15:29:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"custom_code",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2024-01-13T14:38:34Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
amayprro552/beautcz
|
amayprro552
| 2024-01-13T15:22:31Z | 0 | 0 | null |
[
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-12T21:48:07Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
abragin/ppo-Huggy
|
abragin
| 2024-01-13T15:21:18Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-13T15:21:06Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: abragin/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HydraRahul/xyr-sundar-pichai
|
HydraRahul
| 2024-01-13T15:17:13Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-13T15:13:19Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### xyr-sundar-pichai Dreambooth model trained by HydraRahul following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 60448
Sample pictures of this concept:

|
DazMashaly/swin_cont
|
DazMashaly
| 2024-01-13T15:14:30Z | 169 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-13T15:14:07Z |
---
tags:
- image-classification
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: swin_cont
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin_cont
This model was trained from scratch on the zindi dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4766
- eval_accuracy: 0.7545
- eval_runtime: 236.8539
- eval_samples_per_second: 16.352
- eval_steps_per_second: 0.515
- epoch: 2.0
- step: 347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
zaq-hack/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-bpw364-h6-exl2
|
zaq-hack
| 2024-01-13T15:00:43Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T05:55:04Z |
---
license: cc-by-nc-4.0
---
EXL2 @ 3.64bpw<br/>
This format ingests mixtral prompts more quickly.<br/>
This bpw fits nicely into a 24G videocard.<br/>
All credit to the original creators: Noromaid is hot.

---
# Disclaimer:
## This model is experimental, do not expect everything to work.
This model uses the Chatml **prompting format**
---
Beeg noromaid on ***steroids***. Suitable for RP, ERP.
This model was trained on the Zloss fork of Charles, and should fix the issues the model had.
Use Chatml prompt format, but not the special token.
The reason is that Axolotl basically merges the finetune with the base model at 1.0 weight, which is too much, so I used another script available [HERE](https://github.com/DocShotgun/LLM-notebooks/blob/main/weighted-lora-merge.ipynb) to merge with less weight; sadly, it doesn't take the special Chatml token. It's like Orca2 in that regard.
## Credits:
- Undi
- IkariDev
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
### Prompt format: Chatml
```
<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{input}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
```
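Since the card says to use the ChatML layout but not the special token, you can assemble the prompt as plain text (a sketch; the system and user strings are placeholders):

```python
def chatml_prompt(sysprompt: str, user_input: str) -> str:
    # Build the ChatML layout as a plain string (no added special tokens)
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_input}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are Noromaid, a helpful roleplay assistant.", "Hello!"))
```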
## Datasets used:
- Aesir 1, 2 & 3 modified by us, credit to ([MinervaAI](https://huggingface.co/MinervaAI) / [Gryphe](https://huggingface.co/Gryphe))
- [LimaRP-20231109](https://huggingface.co/datasets/lemonilia/LimaRP) ([Lemonilia](https://huggingface.co/lemonilia))
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal) ([NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet))
- [No-robots-ShareGPT](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) ([Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun))
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
notaryanramani/my_awesome_billsum_model
|
notaryanramani
| 2024-01-13T14:34:40Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-13T14:29:12Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5045
- Rouge1: 0.1425
- Rouge2: 0.0544
- Rougel: 0.119
- Rougelsum: 0.119
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
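As a quick, unofficial illustration of the intended summarization use, a minimal sketch; the `summarize: ` prefix follows the Hugging Face billsum tutorial this model appears to be based on, and the input text is a placeholder:

```python
from transformers import pipeline

# Assumption: the model expects the "summarize: " task prefix, as in the tutorial.
summarizer = pipeline("summarization", model="notaryanramani/my_awesome_billsum_model")
text = "summarize: The bill amends the federal tax code to extend and modify several energy-related credits ..."
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```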
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7998 | 0.1292 | 0.0372 | 0.1084 | 0.1089 | 19.0 |
| No log | 2.0 | 124 | 2.5835 | 0.1368 | 0.0492 | 0.1152 | 0.1151 | 19.0 |
| No log | 3.0 | 186 | 2.5213 | 0.143 | 0.0552 | 0.1198 | 0.1198 | 19.0 |
| No log | 4.0 | 248 | 2.5045 | 0.1425 | 0.0544 | 0.119 | 0.119 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TeeZee/Fimbulvetr-10.7B-v1-bpw8.0-h8-exl2
|
TeeZee
| 2024-01-13T14:31:20Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T11:56:47Z |
---
license: cc-by-nc-4.0
language:
- en
---
## **Fimbulvetr-10.7B-v1**
[exllamav2](https://github.com/turboderp/exllamav2) quant for [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
Runs smoothly on a single 3090 in webui with context length set to 4096, the ExLlamav2_HF loader
and cache_8bit=True.
All comments are greatly appreciated; download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
|
imagepipeline/EpiCRealism-Pure-Evo-v5
|
imagepipeline
| 2024-01-13T14:26:16Z | 47 | 1 |
diffusers
|
[
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-13T14:24:51Z |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## EpiCRealism-Pure-Evo-v5
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/530ba359-f680-4704-aeb1-3fefb2cd632d/width=450/02503-1337.jpeg" alt="Generated by Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - How to use?
- Prompt: simple explanation of the image (try first without extra keywords)
- Negative: cartoon, painting, illustration, (worst quality, low quality, normal quality:2)
- Steps: >20 (if the image has errors or artefacts, use higher Steps)
- CFG Scale: 5 (a higher CFG scale can lose realism; depends on prompt, sampler and Steps)
- Sampler: Any sampler (SDE and DPM samplers will result in more realism)
- Size: 512x768 or 768x512
[](https://imagepipeline.io/models/EpiCRealism-Pure-Evo-v5?id=0b397287-3449-4801-9a86-536465f6189f/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "0b397287-3449-4801-9a86-536465f6189f",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
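As a hedged illustration of the LoRA parameters above, here is the same request shape as the earlier example with `lora_models` and `lora_weights` filled in; the LoRA id below is a placeholder, not a real model_id:

```python
import requests
import json

# Assumption: same endpoint and payload shape as the example above.
payload = json.dumps({
    "model_id": "0b397287-3449-4801-9a86-536465f6189f",
    "prompt": "ultra realistic portrait photo of a woman, natural window light, 85mm",
    "negative_prompt": "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)",
    "width": "512",
    "height": "768",
    "samples": "1",
    "num_inference_steps": "30",
    "guidance_scale": 5,
    "lora_models": "your-lora-model-id",  # placeholder: pick a LoRA model_id from the models page
    "lora_weights": "0.7"
})
headers = {"Content-Type": "application/json", "API-Key": "your_api_key"}
response = requests.post("https://imagepipeline.io/sd/text2image/v1/run", headers=headers, data=payload)
print(response.text)
```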
### Feedback
If you have any feedback, please reach out to us at hello@imagepipeline.io
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
LarryAIDraw/ImoutoSaeIrebaIi_ShirakawaMiyako
|
LarryAIDraw
| 2024-01-13T14:22:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:18:53Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/258707/shirakawa-miyako-or-imouto-sae-ireba-ii
|
LarryAIDraw/FatinaV1_1
|
LarryAIDraw
| 2024-01-13T14:22:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:18:22Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/240699/fatina-or-tower-of-druaga
|
LarryAIDraw/smikazuki-nvwls-v1
|
LarryAIDraw
| 2024-01-13T14:21:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:17:34Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/256302/shizuka-mikazuki-zom-100-bucket-list-of-the-dead-lora
|
gywy/mamba-115M-chinese
|
gywy
| 2024-01-13T14:18:17Z | 123 | 9 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2024-01-08T08:42:16Z |
# A nano-LLM built on the Mamba architecture and trained on a high-quality Chinese corpus; at a glance its performance is very good, exceeding expectations.
## This is a pretrained model; it has not gone through SFT
If you like our model, please give it a like.
## Usage:
```
python benchmark_generation_mamba_simple.py --model-name ./save_pretrain/ --prompt "写一篇关于中国经济腾飞的文章,要涉及方方面面。 中国的历史源远流长," --topp 0.9 --temperature 1.5 --repetition-penalty 1.6 --genlen 1000
```
写一篇关于中国经济腾飞的文章,要涉及方方面面。 中国的历史源远流长,从远古到近代都有发展变化;中国历史上也出现过许多朝代和王朝(如夏、商),但都因政治原因而中断或衰落了。 中国历史的发展是长期的历史过程:先秦时期是一个大一统的封建国家时代,秦汉以后则进入封建社会阶段——由小到大分盛衰交替的时期。
在春秋战国以前的中国历史中,虽然也有一些小的分裂割据政权存在过一段时间,但是总体来说还是比较稳定的局面。春秋末期至战国初期是中国历史上的一个重要转折期,也是中国社会经济发生重大变革的一个关键转折点。《左传》中记载:“晋楚争霸”的故事就是发生在这一时期。“晋国霸权崛起”,标志着“大国称霸的时代已经到来”。公元前623年,“晋文公重耳回国即位为国君后,‘尊王攘夷’的口号就应运而生。”这一口号成为后世人们普遍认同的一种思想观念——“天下一家”“一盘散沙论”;同时它也为后来历代统治者所继承和发展并加以发扬光大。
随着社会生产力的发展和商品经济的发展,《周礼》《礼记·礼器篇》、《论语》,以及《孟子》、荀子等儒家经典著作中的某些篇章被编纂成书——《吕氏春秋》。这些书对当时的社会生活产生了深远的影响:《管子》“八观”、“九守”,《韩非子•五蠹》,“慎行修德”,“重赏之下必有勇夫”(《说山》),等等都是其中的典型代表作品;《淮南子》,《淮南鸿烈》(又称《文子》)一书更是将道家学说与儒家的道德规范融为一体;“黄老之学”(即老子和老子的学术)则是汉初以来经汉武帝大力提倡之后逐渐形成的一种新思潮和新理论体系......所有这些都对后来的明清两代具有重要的影响作用。
中国古代社会的阶级结构经历了漫长的演变历程才最终形成于公元1945年的抗日战争胜利前夕。在这一过程中,由于统治阶级内部矛盾尖锐化和社会动荡不安的局面不断加剧,导致整个社会出现了贫富两极分化严重的情况。为了解决这个难题就必须进行改革创新以适应新的形势要求。于是,经过长期的探索实践积累而成的法家思想和儒学便逐步形成了自己的一套治国安邦的理论原则和方法框架系统。这种理论与方法系统的建立及其运用使得中国传统社会中各个阶层的人们都能自觉地遵守这套理论和方法的约束条件从而能够有效地维护自身的利益不受侵害甚至受到损害。因此可以说,在中国古代社会里存在着两种不同的价值取向和价值标准,它们构成了中华民族传统价值观的基本内容之一。这两种价值的内涵不同决定了它们在现代社会中所发挥的作用是不一样的。例如,在西方现代性批判理论中强调个人权利和个人自由的价值主张在现代西方文化中得到了广泛传播并被广泛地接受和使用着,但是在社会主义现代化建设事业当中却遭到了人们的抵制乃至否定甚至是敌视的态度。——这就是我们所说的中国特色社会主义的基本特征之所在!
那么什么是马克思主义?马克思认为马克思主义的产生与发展是人类社会发展的最高成就和最辉煌的阶段标志,其核心就在于它的科学性和革命性的高度统一性与一致性。恩格斯曾指出:“‘共产主义’,‘唯物主义史观的创始人’(指马克思和恩格斯的原话)——这是一切伟大的哲学思想的起点。”(《马克思恩格斯选集》)。毛泽东同志也曾说过这样的话:“......马克思列宁主义不是别的什么而是唯物主义的继续。”“只有把马列主义和我们的实际结合起来才能真正实现共产主义者所提出的全部任务。”(摘自中共中央文献研究室编辑出版的《〈共产党宣言〉序言》。《马克思恩格斯全集》第8卷上册[M].北京:人民出版社,1970年版。)由此可见,马克思的上述论断不仅是对人类认识发展的客观规律的认识的正确概括而且对于人类社会进步和人类命运有着极其重大的意义。
然而另一方面我们也必须看到,尽管中国共产党人提出了这样的观点并且进行了正确的分析论证并在实践中取得了成功,但是他们并没有从根本上改变人们对资本主义制度本质的理解和对资本主义的评价方式问题上的态度。他们仍然坚持用阶级斗争的观点来看待社会经济现象并对之进行分析判断并用这种方法来指导生产经营活动去完成既定的目标。所以他们在理论上还未能跳出传统的思维模式束缚而不能正确地理解现实的经济运行状况和经济运行的各种因素之间的关系及相互联系的关系而不是简单地按照某种特定的模式来进行经济活动决策和行动的结果。这就导致了他们对经济发展规律的把握和理解上存在的片面性或局限性。正因为如此,他们的错误之处也就不可避免地暴露了出来并最终受到了历史的惩罚。(参见李丹霞主编.《邓小平文选》[S].武汉出版社,1</s>[Z],上海三联书店,上海人民出版社,2001.)
总之,改革开放以来的我国经济学界一直处于十分混乱的状态之中,这种状况一直持续到了今天才得以恢复正常状态并为人们所承认和支持。这主要表现在以下几个方面:(1)“文化大革命”期间我国的经济学家们纷纷离开了学术界从事学术研究工作;(2)(三中全会公报);(3)《人民日报》; (四)、(六)、《红旗飘飘报》(《光明日报》编辑部)。其中有些文章发表的时间较晚且时间较长,有的则在文章中直接引用了毛主席的话作为自己的观点论据或者根据自己个人的经验教训得出了错误的结论。当然也有一些作者只是发表了部分
## Next example
```
python benchmark_generation_mamba_simple.py --model-name ./save_pretrain/ --prompt "写一篇关于中国经济腾飞的文章,要涉及方方面面。 \n 中国在80年代开始进行改革开放," --topp 0.9 --temperature 1.2 --repetition-penalty 1.2 --genlen 700 --promptlen 200
```
写一篇关于中国经济腾飞的文章,要涉及方方面面。
中国在80年代开始进行改革开放, 1978 年以后, 中国经济迅速发展, 人民生活水平不断提高, 国家财政收入也逐步增加, 但是仍然存在一些问题: 第一, 国有企业改革没有跟上, 国有企业的规模不断扩大; 第二, 企业经营不善, 亏损严重, 职工工资增长缓慢, 职工福利待遇下降, 职工住房紧张, 职工教育经费不足, 等等。
中国经济的快速发展, 需要大量资金投入, 而这些资金又来自国外, 因此, 中国的经济发展离不开外国资本的帮助和支持。 20 世纪 60 年代后期, 随着中国经济的恢复和发展, 国际资本蜂拥而入, 特别是美国、 日本等国的资本纷纷涌入中国, 使得中国成为世界最大的外商投资国之一。 与此同时, 由于中国加入世贸组织后, 许多发展中国家纷纷要求加入, 于是, 很多发展中国家的政府便提出加入中国市场的问题。 当时, 中国的经济实力和国际地位都远远高于发达国家, 但由于中国缺乏外汇储备, 因而在加入过程中遇到了很大的困难。 同时, 在中国加入的过程中, 许多国家的银行和企业不愿意接受中国政府的贷款, 从而导致中国出现严重的通货膨胀。 为此, 当时的中国政府决定采取一系列措施来缓解这一问题。 首先, 中国人民银行于 1974 年 3 月 1 日正式成立, 并成立了中国人民银行总行, 负责管理全国性的商业银行业务。 其次, 中国人民银行还积极参与国际金融市场的运作, 包括建立国际清算银行( International Bank of International Settlements) , 以及为各国中央银行提供咨询服务等。 最后, 中国人民银行还积极推动人民币国际化进程, 以便尽快融入世界经济体系之中。
但是, 尽管中国加入世界贸易组织是件好事, 但中国加入世界贸易组织的步伐却非常慢。 这是因为, 中国加入世界贸易组织之后, 不仅国内的经济形势发生了很大变化, 而且整个国民经济的发展速度也在加快。 此外, 随着中国加入世界贸易组织, 我国与发达国家的贸易往来日益增多, 我国在国际上的影响力越来越大, 这给中国带来了巨大的压力。 面对这种压力, 邓小平同志提出了“ 三个代表”重要思想, 即把马克思主义基本原理同我国的实际结合起来, 把科学社会主义基本原则同我国的具体实践相结合, 使我们的理论更加丰富, 更好地指导我们的事业。 邓小平同志的这些讲话, 是我国改革开放以来, 对国内外各种思潮的总结, 为进一步推进改革开放提供了重要的思想和理论依据。
二、 改革开放以来的中国宏观经济
改革开放以来, 中国经济增长的速度明显快于其他发展中国家, 并且呈现出明显的上升趋势。 一方面, 随着经济的发展, 人们的消费需求越来越旺盛, 人们的生活水平也越来越高, 这就促使人们对物质文明和精神文明的追求程度大大提高, 从而使人们的精神面貌发生巨大变化。 另一方面, 随着中国经济的快速发展和对外开放程度的加深, 中国的经济总量在世界经济中的比重不断增加, 同时也引起了国际社会对中国经济发展的关注。 随着中国经济的持续高速发展, 中国的经济总量已经超过了世界上任何一个国家, 成为仅次于美国的全球第三大经济体。 与此同时, 中国的经济总量也已经超过日本, 成为世界第二大经济体。 可以说, 改革开放以来, 中国已经成为了
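If you would rather call the model from Python than through the benchmark script, a rough sketch along the lines of the state-spaces/mamba generation utilities is below. The `mamba_ssm` import path, the ability to load `./save_pretrain/` with `from_pretrained`, and the presence of a compatible tokenizer in that directory are assumptions; adapt it to the benchmark script you actually use:

```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel  # assumption: mamba_ssm is installed

# Assumptions: ./save_pretrain/ is loadable with MambaLMHeadModel.from_pretrained
# and ships a Hugging Face tokenizer compatible with the model.
model = MambaLMHeadModel.from_pretrained("./save_pretrain/", device="cuda", dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("./save_pretrain/")

prompt = "写一篇关于中国经济腾飞的文章,要涉及方方面面。 中国的历史源远流长,"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
out = model.generate(
    input_ids=input_ids,
    max_length=input_ids.shape[1] + 700,
    temperature=1.5,
    top_p=0.9,
    repetition_penalty=1.6,
)
print(tokenizer.decode(out[0].tolist()))
```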
|
LarryAIDraw/Sword_Maiden__Goblin_Slayer_
|
LarryAIDraw
| 2024-01-13T14:14:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:04:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/263294/sword-maiden-goblin-slayer
|
LarryAIDraw/Yorha_HolyService2B-DEF
|
LarryAIDraw
| 2024-01-13T14:13:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-13T14:03:08Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/238120?modelVersionId=296845
|
MaziyarPanahi/komt-mistral-7b-v1-GPTQ
|
MaziyarPanahi
| 2024-01-13T14:07:27Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:davidkim205/komt-mistral-7b-v1",
"base_model:finetune:davidkim205/komt-mistral-7b-v1",
"license:apache-2.0"
] |
text-generation
| 2024-01-13T14:05:30Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- mistral
- text-generation
- finetuned
- en
- ko
- arxiv:2308.06502
- arxiv:2308.06259
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: komt-mistral-7b-v1-GPTQ
base_model: davidkim205/komt-mistral-7b-v1
inference: false
model_creator: davidkim205
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/komt-mistral-7b-v1-GPTQ](https://huggingface.co/MaziyarPanahi/komt-mistral-7b-v1-GPTQ) is a quantized (GPTQ) version of [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/komt-mistral-7b-v1-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
|
rhndeveloper/orca-2-7B-v01-fine-tuned-using-ludwig-4bit
|
rhndeveloper
| 2024-01-13T14:04:46Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Orca-2-7b",
"base_model:adapter:microsoft/Orca-2-7b",
"region:us"
] | null | 2024-01-13T14:04:40Z |
---
library_name: peft
base_model: microsoft/Orca-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
akrisn/q-FrozenLake-v1-4x4-noSlippery
|
akrisn
| 2024-01-13T13:53:01Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T13:52:40Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="akrisn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
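For a complete rollout, a hedged sketch is below. It assumes the dictionary layout used by the Hugging Face Deep RL course notebook (where `load_from_hub` is defined and the Q-table is stored under the `"qtable"` key) and a gymnasium-style environment API:

```python
import gymnasium as gym  # assumption: gymnasium-style reset/step API
import numpy as np

# `model` is the dictionary loaded above (assumed course layout: "env_id", "qtable", ...).
env = gym.make(model["env_id"], is_slippery=False)
qtable = np.array(model["qtable"])

state, _ = env.reset(seed=42)
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```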
|
gagan3012/Multirial
|
gagan3012
| 2024-01-13T13:52:19Z | 1,385 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"ar",
"en",
"fr",
"es",
"de",
"hi",
"id",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T22:32:13Z |
---
license: apache-2.0
tags:
- moe
- mixtral
language:
- ar
- en
- fr
- es
- de
- hi
- id
- zh
---
# Multirial
MultiRial is the first-ever multilingual Mixture of Experts (MoE) model, built from the following expert models:
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [azale-ai/Starstreak-7b-beta](https://huggingface.co/azale-ai/Starstreak-7b-beta)
* [gagan3012/Mistral_arabic_dpo](https://huggingface.co/gagan3012/Mistral_arabic_dpo)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gagan3012/Multirial"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Razafaheem/finetuning-sentiment-model-ophelia-4
|
Razafaheem
| 2024-01-13T13:51:47Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-13T13:05:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-ophelia-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-ophelia-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4582
- Accuracy: 0.9149
## Model description
More information needed
## Intended uses & limitations
More information needed
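For a quick, unofficial illustration of inference, a minimal sketch; the label names are an assumption (they may be the generic `LABEL_0`/`LABEL_1` unless an `id2label` mapping was saved with the model):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Razafaheem/finetuning-sentiment-model-ophelia-4")
print(classifier("I really enjoyed this movie!"))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```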
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
A2H0H0R1/Qwen-7B-Chat-Int4-Qlora-rice-new-long
|
A2H0H0R1
| 2024-01-13T13:45:43Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-7B-Chat-Int4",
"base_model:adapter:Qwen/Qwen-7B-Chat-Int4",
"region:us"
] | null | 2024-01-13T13:29:32Z |
---
library_name: peft
base_model: Qwen/Qwen-7B-Chat-Int4
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Mayem8630/Mayem
|
Mayem8630
| 2024-01-13T13:30:22Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"climate",
"legal",
"biology",
"image-to-text",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"license:creativeml-openrail-m",
"region:us"
] |
image-to-text
| 2024-01-13T13:27:54Z |
---
license: creativeml-openrail-m
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: image-to-text
tags:
- climate
- legal
- biology
---
|
JDB03/PPO-SnowballTarget
|
JDB03
| 2024-01-13T13:27:05Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-01-13T13:25:51Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JDB03/PPO-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Dhanang/kategori_aspek_model
|
Dhanang
| 2024-01-13T13:26:35Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-13T11:57:40Z |
---
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: kategori_aspek_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kategori_aspek_model
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5731
- Accuracy: 0.7532
- F1: 0.7342
- Precision: 0.6791
- Recall: 0.8234
## Model description
More information needed
## Intended uses & limitations
More information needed
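As an unofficial illustration of inference, a minimal sketch; the Indonesian input sentence is a placeholder, and the aspect-category label names are an assumption (they may appear as generic `LABEL_i` values unless an `id2label` mapping was saved):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Dhanang/kategori_aspek_model")
print(classifier("Pelayanan di kantor ini sangat lambat."))
```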
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6662 | 1.0 | 1816 | 0.6854 | 0.7449 | 0.7139 | 0.6657 | 0.7857 |
| 0.4846 | 2.0 | 3632 | 0.5731 | 0.7532 | 0.7342 | 0.6791 | 0.8234 |
| 0.3135 | 3.0 | 5448 | 0.6906 | 0.7667 | 0.7431 | 0.7017 | 0.7994 |
| 0.2189 | 4.0 | 7264 | 0.8181 | 0.7755 | 0.7387 | 0.7065 | 0.7994 |
| 0.152 | 5.0 | 9080 | 0.9838 | 0.7893 | 0.7486 | 0.7290 | 0.7799 |
| 0.0938 | 6.0 | 10896 | 1.0601 | 0.7826 | 0.7598 | 0.7314 | 0.7957 |
| 0.0629 | 7.0 | 12712 | 1.3297 | 0.7868 | 0.7665 | 0.7673 | 0.7684 |
| 0.0423 | 8.0 | 14528 | 1.3356 | 0.7906 | 0.7639 | 0.7477 | 0.7875 |
| 0.0178 | 9.0 | 16344 | 1.5868 | 0.7887 | 0.7625 | 0.7656 | 0.7638 |
| 0.008 | 10.0 | 18160 | 1.5453 | 0.7928 | 0.7650 | 0.7621 | 0.7709 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
AfnanTS/Mutilingual-DBpediaArabic
|
AfnanTS
| 2024-01-13T13:20:02Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-12T16:34:11Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Mutilingual-DBpediaArabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mutilingual-DBpediaArabic
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
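As an unofficial illustration of the fill-mask use, a minimal sketch; it assumes the XLM-RoBERTa `<mask>` token is unchanged by fine-tuning:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="AfnanTS/Mutilingual-DBpediaArabic")
print(fill("عاصمة فرنسا هي <mask>."))  # "The capital of France is <mask>."
```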
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
TheBloke/Helion-4x34B-GPTQ
|
TheBloke
| 2024-01-13T13:12:24Z | 9 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"base_model:Weyaxi/Helion-4x34B",
"base_model:quantized:Weyaxi/Helion-4x34B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-13T05:24:38Z |
---
base_model: Weyaxi/Helion-4x34B
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Helion 4X34B
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- yi
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Helion 4X34B - GPTQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Helion 4X34B](https://huggingface.co/Weyaxi/Helion-4x34B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Yağız Çalık's Helion 4X34B](https://huggingface.co/Weyaxi/Helion-4x34B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Helion-4x34B-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Helion-4x34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 9.96 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 9.93 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 44.21 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 46.28 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Helion-4x34B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Helion-4x34B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Helion-4x34B-GPTQ`:
```shell
mkdir Helion-4x34B-GPTQ
huggingface-cli download TheBloke/Helion-4x34B-GPTQ --local-dir Helion-4x34B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Helion-4x34B-GPTQ
huggingface-cli download TheBloke/Helion-4x34B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Helion-4x34B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Helion-4x34B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Helion-4x34B-GPTQ --local-dir Helion-4x34B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Helion-4x34B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Helion-4x34B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Helion-4x34B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Helion-4x34B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Helion-4x34B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant"  # placeholder: define the system message used in the template below
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Helion-4x34B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's Helion 4X34B

# Helion-4x34B
This is the model for Helion-4x34B. I used [mergekit](https://github.com/cg123/mergekit) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, you can utilize the prompt templates provided by bagel as well as the other experts' prompt templates.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Human Assistant
```
Human: {user}
### Assistant: {assistant}
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", "science", "biology", "chemistry", "physics"]
negative_prompts: ["math", "reason", "mathematics", "solve", "count", "code", "python", "javascript", "programming", "algorithm"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
- source_model: SUS-Chat-34B
positive_prompts: ["math", "reason", "mathematics", "solve", "count", "assistant"]
- source_model: platypus-yi-34b
positive_prompts: [""]
negative_prompts: ["math", "reason", "mathematics", "solve", "count"]
```
# Quantizationed versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Helion-4x34B-GPTQ](https://huggingface.co/TheBloke/Helion-4x34B-GPTQ)
##### GGUF
- [TheBloke/Helion-4x34B-GGUF](https://huggingface.co/TheBloke/Helion-4x34B-GGUF)
##### AWQ
- [TheBloke/Helion-4x34B-AWQ](https://huggingface.co/TheBloke/Helion-4x34B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
mihael974/speecht5_finetuned_voxpopuli_hr
|
mihael974
| 2024-01-13T13:10:25Z | 63 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-11T23:22:35Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_hr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_hr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2562
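Pending a fuller model description, the following is a minimal, hedged inference sketch based on the standard SpeechT5 flow from the Transformers documentation; the speaker x-vector source, the example index, and the Croatian test sentence are illustrative assumptions, not part of the original training setup.

```python
# Hedged usage sketch: synthesize speech with this fine-tuned SpeechT5 checkpoint.
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "mihael974/speecht5_finetuned_voxpopuli_hr"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim speaker x-vector works; this one is borrowed from CMU ARCTIC (assumption).
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Dobar dan, ovo je proba.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
# `speech` is a 1-D 16 kHz waveform tensor; save it with soundfile.write if needed.
```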
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2343 | 25.64 | 1000 | 1.2588 |
| 1.2336 | 51.28 | 2000 | 1.2571 |
| 1.2338 | 76.92 | 3000 | 1.2542 |
| 1.2241 | 102.56 | 4000 | 1.2562 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CarlosChcn/Pieridae_Classifier
|
CarlosChcn
| 2024-01-13T13:00:13Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-01-13T12:58:51Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
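While this card is being filled in, a minimal, hedged loading sketch is shown below; it assumes the repo contains a standard exported fastai learner and uses a placeholder image path.

```python
# Hedged sketch: load the fastai learner from the Hub and classify one image.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("CarlosChcn/Pieridae_Classifier")
pred, pred_idx, probs = learner.predict("butterfly.jpg")  # placeholder image path
print(pred, float(probs[pred_idx]))
```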
|
yuanzhoulvpi/intermlm-7b-lml_001
|
yuanzhoulvpi
| 2024-01-13T12:55:23Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internlm",
"feature-extraction",
"text-generation",
"custom_code",
"zh",
"dataset:yuanzhoulvpi/rename_robot",
"region:us"
] |
text-generation
| 2024-01-13T00:21:07Z |
---
datasets:
- yuanzhoulvpi/rename_robot
language:
- zh
pipeline_tag: text-generation
---
1. Train the InternLM model with LoRA.
2. During training, how do you make the model know its own identity and refuse to answer related questions? A solution is provided here.
## Model results
Here is what a model trained with this method can do:


As you can see:
1. The model has a very clear sense of its own identity;
2. The model knows how to refuse to answer;
3. The model also answers other questions reasonably well;
## Training script
GitHub training code: [https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/internlm-sft](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/internlm-sft)
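For readers who want the gist without reading the linked repository: the idea is to mix identity and refusal question-answer pairs (the `rename_robot` dataset above) into the supervised fine-tuning data and train InternLM with a LoRA adapter. The sketch below is illustrative only; the base model name, target modules and hyperparameters are assumptions and may differ from the actual training script.

```python
# Illustrative sketch, not the real training script (see the GitHub link above).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "internlm/internlm-7b"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

# Attach a LoRA adapter; target_modules here are an assumption.
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora)

# Identity / refusal pairs are simply part of the SFT data, so the model learns
# both who it is and when to decline to answer out-of-scope questions.
identity_data = load_dataset("yuanzhoulvpi/rename_robot", split="train")
```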
|
kar-saaragh/poca-SoccerTwos
|
kar-saaragh
| 2024-01-13T12:51:19Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-01-13T12:50:50Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: kar-saaragh/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Elvira0/Art
|
Elvira0
| 2024-01-13T12:49:17Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-01-11T15:17:05Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
agustoslu/data_experiment
|
agustoslu
| 2024-01-13T12:45:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/mistral-7b-instruct-v0.1-sharded",
"base_model:adapter:ybelkada/mistral-7b-instruct-v0.1-sharded",
"region:us"
] | null | 2024-01-13T12:41:35Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/mistral-7b-instruct-v0.1-sharded
model-index:
- name: data_experiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data_experiment
This model is a fine-tuned version of [ybelkada/mistral-7b-instruct-v0.1-sharded](https://huggingface.co/ybelkada/mistral-7b-instruct-v0.1-sharded); the fine-tuning dataset is not documented.
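Because this repo stores a PEFT adapter rather than full weights, a hedged loading sketch would attach it to the sharded base model; this is inferred from the library metadata, not taken from the training code.

```python
# Hedged sketch: attach the PEFT adapter to its base model for inference.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "ybelkada/mistral-7b-instruct-v0.1-sharded"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "agustoslu/data_experiment")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```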
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
peulsilva/phrase-bert-setfit-sst5-5shots
|
peulsilva
| 2024-01-13T12:39:12Z | 46 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-13T12:39:06Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# peulsilva/phrase-bert-setfit-sst5-5shots
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('peulsilva/phrase-bert-setfit-sst5-5shots')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peulsilva/phrase-bert-setfit-sst5-5shots')
model = AutoModel.from_pretrained('peulsilva/phrase-bert-setfit-sst5-5shots')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peulsilva/phrase-bert-setfit-sst5-5shots)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 300 with parameters:
```
{'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': None}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Razafaheem/finetuning-sentiment-model-ophelia
|
Razafaheem
| 2024-01-13T12:37:24Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-13T11:13:34Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-ophelia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-ophelia
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2758
- Accuracy: 0.9133
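A quick, hedged inference sketch; the label names returned depend on the (undocumented) fine-tuning configuration.

```python
# Hedged sketch: run the fine-tuned sentiment classifier on a sample sentence.
from transformers import pipeline

classifier = pipeline("text-classification", model="Razafaheem/finetuning-sentiment-model-ophelia")
print(classifier("I really enjoyed this movie!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] — actual labels depend on the training setup.
```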
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
SharonTudi/DIALOGUE_final_model
|
SharonTudi
| 2024-01-13T12:33:17Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-10T10:00:47Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DIALOGUE_final_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DIALOGUE_final_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0469
- Accuracy: 0.9902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0037 | 0.31 | 15 | 0.0496 | 0.9902 |
| 0.0013 | 0.62 | 30 | 0.0437 | 0.9902 |
| 0.0008 | 0.94 | 45 | 0.0431 | 0.9902 |
| 0.0006 | 1.25 | 60 | 0.0387 | 0.9902 |
| 0.0005 | 1.56 | 75 | 0.0447 | 0.9902 |
| 0.0004 | 1.88 | 90 | 0.0465 | 0.9902 |
| 0.0004 | 2.19 | 105 | 0.0890 | 0.9804 |
| 0.0003 | 2.5 | 120 | 0.1008 | 0.9804 |
| 0.0004 | 2.81 | 135 | 0.0469 | 0.9902 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jaeyoungk/yi-6b-ko-fin
|
jaeyoungk
| 2024-01-13T12:30:49Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:01-ai/Yi-6B",
"base_model:adapter:01-ai/Yi-6B",
"region:us"
] | null | 2024-01-13T12:30:35Z |
---
library_name: peft
base_model: 01-ai/Yi-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
anakib1/multi-whisper-playground
|
anakib1
| 2024-01-13T12:26:53Z | 99 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-12T19:15:50Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: multi-whisper-playground
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-whisper-playground
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8508
- Wer: 195.4146
- Acc: 12.6021
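A minimal, hedged transcription sketch; the audio path is a placeholder, and given the very high WER above the checkpoint should be treated as experimental.

```python
# Hedged sketch: transcribe an audio file with this fine-tuned Whisper-tiny checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anakib1/multi-whisper-playground")
result = asr("sample.wav")  # placeholder path to a 16 kHz mono audio file
print(result["text"])
```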
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 2.2601 | 1.0 | 215 | 2.1670 | 210.5479 | 12.2520 |
| 1.8422 | 2.0 | 430 | 1.9717 | 187.2986 | 12.2520 |
| 1.6574 | 3.0 | 645 | 1.8943 | 162.3279 | 12.2520 |
| 1.4953 | 4.0 | 860 | 1.8562 | 187.3132 | 12.1354 |
| 1.4436 | 5.0 | 1075 | 1.8402 | 173.3153 | 12.2520 |
| 1.322 | 6.0 | 1290 | 1.8362 | 184.9692 | 12.4854 |
| 1.2386 | 7.0 | 1505 | 1.8372 | 183.5409 | 12.6021 |
| 1.1252 | 8.0 | 1720 | 1.8430 | 192.3747 | 12.6021 |
| 1.0449 | 9.0 | 1935 | 1.8459 | 198.8866 | 12.6021 |
| 1.0509 | 10.0 | 2150 | 1.8508 | 195.4146 | 12.6021 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
peulsilva/phrase-bert-setfit-sst5-2shots
|
peulsilva
| 2024-01-13T12:26:01Z | 47 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-13T12:25:54Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# peulsilva/phrase-bert-setfit-sst5-2shots
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('peulsilva/phrase-bert-setfit-sst5-2shots')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peulsilva/phrase-bert-setfit-sst5-2shots')
model = AutoModel.from_pretrained('peulsilva/phrase-bert-setfit-sst5-2shots')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peulsilva/phrase-bert-setfit-sst5-2shots)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 45 with parameters:
```
{'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': None}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
quantus17/rise3
|
quantus17
| 2024-01-13T12:25:26Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-13T11:40:14Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a e6z7a armchair
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
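The card does not state how the DreamBooth weights are stored; assuming AutoTrain's usual SDXL LoRA output, a hedged inference sketch using the instance prompt above would look like this.

```python
# Hedged sketch: assumes this repo holds SDXL DreamBooth LoRA weights from AutoTrain.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("quantus17/rise3")  # assumption: LoRA weights, not a full pipeline

image = pipe("photo of a e6z7a armchair in a sunlit living room", num_inference_steps=30).images[0]
image.save("armchair.png")
```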
|
gonmadri/laporta
|
gonmadri
| 2024-01-13T12:20:33Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-13T12:18:34Z |
---
license: other
license_name: modelo
license_link: LICENSE
---
|
Utkarsh02/testanimal
|
Utkarsh02
| 2024-01-13T12:15:16Z | 178 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-13T12:15:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: testanimal
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9666666388511658
---
# testanimal
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
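A short, hedged inference sketch (the image path is a placeholder):

```python
# Hedged sketch: classify an animal photo with this HuggingPics ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="Utkarsh02/testanimal")
print(classifier("my_pet.jpg"))  # placeholder path; labels are cat, cow, dog, lion, tiger
```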
## Example Images
#### Cat

#### Cow

#### Dog

#### Lion

#### Tiger

|
tranthaihoa/alpaca_Llama2_tuned
|
tranthaihoa
| 2024-01-13T11:58:43Z | 0 | 0 | null |
[
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b",
"base_model:finetune:unsloth/llama-2-7b",
"license:llama2",
"region:us"
] | null | 2024-01-12T09:50:50Z |
---
license: llama2
base_model: unsloth/llama-2-7b
tags:
- trl
- sft
- unsloth
- unsloth
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on an unknown dataset.
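The repo name and tags suggest Alpaca-style SFT with Unsloth; the prompt template below is the standard Alpaca format and is an assumption, since the training data is not documented here.

```python
# Hedged sketch: standard Alpaca prompt template, assumed from the model name.
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
print(alpaca_prompt.format(instruction="Summarize what a LoRA adapter does."))
```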
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.14.1
|
sumangpt/falcon_oasst1
|
sumangpt
| 2024-01-13T11:55:20Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"custom_code",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2024-01-12T14:25:25Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
TheBloke/Bagel-Hermes-2x34b-GPTQ
|
TheBloke
| 2024-01-13T11:51:31Z | 14 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"base_model:Weyaxi/Bagel-Hermes-2x34B",
"base_model:quantized:Weyaxi/Bagel-Hermes-2x34B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-01-13T00:11:02Z |
---
base_model: Weyaxi/Bagel-Hermes-2x34b
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
model_creator: "Yağız Çalık"
model_name: Bagel Hermes 2X34B
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- yi
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Bagel Hermes 2X34B - GPTQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Bagel Hermes 2X34B](https://huggingface.co/Weyaxi/Bagel-Hermes-2x34b)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Yağız Çalık's Bagel Hermes 2X34B](https://huggingface.co/Weyaxi/Bagel-Hermes-2x34b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Bagel-Hermes-2x34b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 31.84 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 32.99 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 36.50 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 24.35 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 25.45 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.99 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.97 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Bagel-Hermes-2x34b-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Bagel-Hermes-2x34b-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Bagel-Hermes-2x34b-GPTQ`:
```shell
mkdir Bagel-Hermes-2x34b-GPTQ
huggingface-cli download TheBloke/Bagel-Hermes-2x34b-GPTQ --local-dir Bagel-Hermes-2x34b-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Bagel-Hermes-2x34b-GPTQ
huggingface-cli download TheBloke/Bagel-Hermes-2x34b-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Bagel-Hermes-2x34b-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Bagel-Hermes-2x34b-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Bagel-Hermes-2x34b-GPTQ --local-dir Bagel-Hermes-2x34b-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Bagel-Hermes-2x34b-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Bagel-Hermes-2x34b-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Bagel-Hermes-2x34b-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Bagel-Hermes-2x34b-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Bagel-Hermes-2x34b-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Bagel-Hermes-2x34b-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's Bagel Hermes 2X34B

# Bagel-Hermes-2x34B
This is the model for Bagel-Hermes-2x34B. I used [mergekit](https://github.com/cg123/mergekit) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, and [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) uses ChatML, you can utilize ChatML and other prompt templates provided by bagel.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
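As a quick illustration, the sketch below builds the ChatML prompt shown above by hand and generates with `transformers`. The repo id, system prompt, and generation settings are assumptions for the example, not part of the original card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Bagel-Hermes-2x34B"  # assumed repo id for the unquantized model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Assemble a ChatML prompt following the template above
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a mixture-of-experts model is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```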
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
positive_prompts: ["question answering", "Q:", "science", "biology", "chemistry", "physics"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
```
# Quantizationed versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Bagel-Hermes-2x34B-GPTQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GPTQ)
##### GGUF
- [TheBloke/Bagel-Hermes-2x34B-GGUF](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-GGUF)
##### AWQ
- [TheBloke/Bagel-Hermes-2x34B-AWQ](https://huggingface.co/TheBloke/Bagel-Hermes-2x34B-AWQ)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
Weyaxi/Dolphin2.1-OpenOrca-7B
|
Weyaxi
| 2024-01-13T11:49:19Z | 1,558 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-11T09:23:18Z |
---
license: apache-2.0
model-index:
- name: Dolphin2.1-OpenOrca-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Dolphin2.1-OpenOrca-7B
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
A merge of [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using the TIES merge method.
### *Weights*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3
### *Density*
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5
# Quantizationed versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GPTQ)
##### GGUF
- [TheBloke/Dolphin2.1-OpenOrca-7B-GGUF](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-GGUF)
##### AWQ
- [TheBloke/Dolphin2.1-OpenOrca-7B-AWQ](https://huggingface.co/TheBloke/Dolphin2.1-OpenOrca-7B-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Dolphin2.1-OpenOrca-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.47|
|AI2 Reasoning Challenge (25-Shot)|63.91|
|HellaSwag (10-Shot) |84.26|
|MMLU (5-Shot) |62.66|
|TruthfulQA (0-shot) |53.84|
|Winogrande (5-shot) |78.22|
|GSM8k (5-shot) |19.94|
|
Shijia/xlmroberta_clir_baseline
|
Shijia
| 2024-01-13T11:48:06Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-08T18:22:10Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmroberta_clir_baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta_clir_baseline
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0122
- Spearman Corr: 0.9098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
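For orientation, the hyperparameters above map roughly onto the following `TrainingArguments`; this is only a sketch, since the actual training script, dataset loading, and regression head are not published (the output directory and `fp16=True` for "Native AMP" are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlmroberta_clir_baseline",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,   # effective train batch size of 64
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```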
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 1.0 | 206 | 0.0294 | 0.6807 |
| 0.0427 | 2.0 | 413 | 0.0215 | 0.7735 |
| 0.0427 | 3.0 | 619 | 0.0183 | 0.8349 |
| 0.0209 | 4.0 | 826 | 0.0154 | 0.8695 |
| 0.0209 | 5.0 | 1032 | 0.0148 | 0.8854 |
| 0.0129 | 6.0 | 1239 | 0.0124 | 0.8944 |
| 0.0129 | 7.0 | 1445 | 0.0140 | 0.8977 |
| 0.0094 | 8.0 | 1652 | 0.0113 | 0.9070 |
| 0.0094 | 9.0 | 1858 | 0.0111 | 0.9099 |
| 0.0077 | 9.98 | 2060 | 0.0122 | 0.9098 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ
|
MaziyarPanahi
| 2024-01-13T11:43:28Z | 76 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1"
] |
text-generation
| 2024-01-12T21:58:42Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.1
inference: false
license: apache-2.0
model_creator: mistralai
model_name: Mistral-7B-Instruct-v0.1-GPTQ
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- safetensors
- mistral
- text-generation
- finetuned
- arxiv:2310.06825
- license:apache-2.0
- autotrain_compatible
- has_space
- text-generation-inference
- region:us
---
# Description
[MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ) is a quantized (GPTQ) version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python Code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
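Alternatively, recent `transformers` releases (with `optimum` and `auto-gptq` installed) can usually load GPTQ checkpoints directly, since the quantization settings ship in the repo's config. A minimal sketch, assuming such a setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/Mistral-7B-Instruct-v0.1-GPTQ"
# device_map="auto" places the quantized weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```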
|
Wajid333/taxi
|
Wajid333
| 2024-01-13T11:37:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-13T11:37:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # classic gym, as used in the Deep RL course; gymnasium also works with small API changes

# load_from_hub is the helper defined in the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="Wajid333/taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
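Continuing from the snippet above, a greedy rollout with the downloaded Q-table might look like the sketch below. It assumes the pickled dict exposes the learned table under a `qtable` key (as in the Deep RL course template) and uses the classic `gym` 4-tuple step API; adjust for `gymnasium` if needed.

```python
import numpy as np

qtable = np.array(model["qtable"])  # assumed key name

state = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(qtable[state]))        # pick the greedy action for this state
    state, reward, done, info = env.step(action)  # classic gym API (obs, reward, done, info)
    total_reward += reward
print("Episode return:", total_reward)
```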
|
younoger/YGBNumbersBert-0.5
|
younoger
| 2024-01-13T11:24:27Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:younoger/autotrain-data-YGBNumbersBert-0.5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-13T11:24:18Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- younoger/autotrain-data-YGBNumbersBert-0.5
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.0019458403112366796
- f1: 1.0
- precision: 1.0
- recall: 1.0
- auc: 1.0
- accuracy: 1.0
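A minimal usage sketch, assuming the checkpoint loads as a standard text-classification model (label names come from the model's config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="younoger/YGBNumbersBert-0.5")
print(classifier("I love AutoTrain"))
```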
|
adhisetiawan/ViT-flowers-species
|
adhisetiawan
| 2024-01-13T11:14:33Z | 49 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-13T10:42:44Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: adhisetiawan/ViT-flowers-species
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# adhisetiawan/ViT-flowers-species
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0831
- Validation Loss: 0.1388
- Train Accuracy: 0.9605
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 14680, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
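The optimizer configuration above can be reconstructed roughly as follows; this is a sketch of the described settings, not the original training code:

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay from 3e-5 to 0 over 14680 steps, as in the config dict above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=14680,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    weight_decay_rate=0.01,
)
```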
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7563 | 0.3186 | 0.9482 | 0 |
| 0.2194 | 0.2133 | 0.9496 | 1 |
| 0.1417 | 0.1802 | 0.9550 | 2 |
| 0.0973 | 0.1482 | 0.9605 | 3 |
| 0.0831 | 0.1388 | 0.9605 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|