Dataset schema:

| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 12:28:55 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 539 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 12:28:29 |
| card | string | length 11 to 1.01M |
**Record:** aj30/experiments-llama | author: aj30 | library: null | pipeline: null | downloads: 0 | likes: 0 | created: 2024-01-07T00:11:00Z | last modified: 2024-01-07T00:11:09Z
**Tags:** tensorboard, safetensors, generated_from_trainer, dataset:pubmed-dataset, base_model:meta-llama/Llama-2-7b-hf, base_model:finetune:meta-llama/Llama-2-7b-hf, region:us
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- pubmed-dataset
model-index:
- name: experiments-llama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiments-llama
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the pubmed-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
- mixed_precision_training: Native AMP
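For reference, here is a rough reconstruction of the run as `transformers` `TrainingArguments`. The values come from the list above; `output_dir` and anything else not listed is a hypothetical placeholder, not taken from the actual training script.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="experiments-llama",   # hypothetical placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,    # 4 (batch) x 4 (accumulation) = total train batch size 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2,
    fp16=True,                        # "Native AMP" mixed precision
)
```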
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.799 | 0.4 | 36 | 1.7739 |
| 1.6351 | 0.8 | 72 | 1.7412 |
| 1.6894 | 1.2 | 108 | 1.7203 |
| 1.7854 | 1.6 | 144 | 1.7165 |
| 1.8052 | 2.0 | 180 | 1.7158 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
**Record:** dnoever/Falkor-7b-5.0bpw-exl2 | author: dnoever | library: transformers | pipeline: text-generation | downloads: 8 | likes: 0 | created: 2024-01-07T00:02:47Z | last modified: 2024-01-07T00:06:17Z
**Tags:** transformers, safetensors, mistral, text-generation, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
license: apache-2.0
---
# Falkor 7B
- RAG (dragon) Model
<img src="falkor.png" width="300">
Model merge between Chupacabra 7b v2.04 and dragon-mistral-7b-v0
- ---> [Theme Song](https://www.youtube.com/watch?v=lHytjEj7B9g) <---
# Original Model Card for dragon-mistral-7b-v0
<!-- Provide a quick summary of what the model is/does. -->
dragon-mistral-7b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model.
DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
Scores are the average of 2 test runs, with 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.
- **Accuracy Score**: **96.50** correct out of 100
- Not Found Classification: 92.50%
- Boolean: 97.50%
- Math/Logic: 81.25%
- Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal)
- Summarization Quality (1-5): 4 (Coherent, extractive)
- Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Mistral-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7B-Base
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources.
DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with dRAGon is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0")
model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0")
```
Please refer to the `generation_test` .py files in this repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, so the test set can be swapped out for a RAG workflow over business documents.
The dRAGon model was fine-tuned with a simple `<human>` and `<bot>` wrapper, so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```
The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
1. Text Passage Context, and
2. Specific question or instruction based on the text passage
To get the best results, package `my_prompt` as follows:

```python
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```
If you are using a HuggingFace generation script:

```python
# tokenizer and model are loaded as shown above; entries is a dict with
# "context" and "query" keys, and device is e.g. "cuda" or "cpu"

# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```
## Model Card Contact
Darren Oberst & llmware team
**Record:** BEE-spoke-data/beecoder-220M-python | author: BEE-spoke-data | library: transformers | pipeline: text-generation | downloads: 16 | likes: 2 | created: 2024-01-04T23:39:27Z | last modified: 2024-01-07T00:06:09Z
**Tags:** transformers, safetensors, llama, text-generation, python, codegen, markdown, smol_llama, en, dataset:BEE-spoke-data/pypi_clean-deduped, dataset:bigcode/the-stack-smol-xl, dataset:EleutherAI/proof-pile-2, base_model:BEE-spoke-data/smol_llama-220M-GQA, base_model:finetune:BEE-spoke-data/smol_llama-220M-GQA, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
license: apache-2.0
base_model: BEE-spoke-data/smol_llama-220M-GQA
datasets:
- BEE-spoke-data/pypi_clean-deduped
- bigcode/the-stack-smol-xl
- EleutherAI/proof-pile-2
language:
- en
tags:
- python
- codegen
- markdown
- smol_llama
metrics:
- accuracy
inference:
parameters:
max_new_tokens: 64
min_new_tokens: 8
do_sample: true
epsilon_cutoff: 0.0008
temperature: 0.3
top_p: 0.9
repetition_penalty: 1.02
no_repeat_ngram_size: 8
renormalize_logits: true
widget:
- text: |
def add_numbers(a, b):
return
example_title: Add Numbers Function
- text: |
class Car:
def __init__(self, make, model):
self.make = make
self.model = model
def display_car(self):
example_title: Car Class
- text: |
import pandas as pd
data = {'Name': ['Tom', 'Nick', 'John'], 'Age': [20, 21, 19]}
df = pd.DataFrame(data).convert_dtypes()
# eda
example_title: Pandas DataFrame
- text: |
def factorial(n):
if n == 0:
return 1
else:
example_title: Factorial Function
- text: |
def fibonacci(n):
if n <= 0:
raise ValueError("Incorrect input")
elif n == 1:
return 0
elif n == 2:
return 1
else:
example_title: Fibonacci Function
- text: |
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 100)
# simple plot
example_title: Matplotlib Plot
- text: |
def reverse_string(s:str) -> str:
return
example_title: Reverse String Function
- text: |
def is_palindrome(word:str) -> bool:
return
example_title: Palindrome Function
- text: |
def bubble_sort(lst: list):
n = len(lst)
for i in range(n):
for j in range(0, n-i-1):
example_title: Bubble Sort Function
- text: |
def binary_search(arr, low, high, x):
if high >= low:
mid = (high + low) // 2
if arr[mid] == x:
return mid
elif arr[mid] > x:
example_title: Binary Search Function
pipeline_tag: text-generation
---
# BEE-spoke-data/beecoder-220M-python
This is `BEE-spoke-data/smol_llama-220M-GQA` fine-tuned for code generation on:
- a filtered version of stack-smol-XL
- a deduped version of the 'algebraic stack' from proof-pile-2
- a cleaned and deduped pypi dataset
This model (and the base model) were both trained with a context length of 2048 tokens.
## examples
> Example script for inference testing: [here](https://gist.github.com/pszemraj/c7738f664a64b935a558974d23a7aa8c)
It has its limitations at 220M parameters, but seems decent for single-line completion or docstring generation, and for speculative decoding in those use cases.

The screenshot above shows inference running on a laptop CPU.
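For a quick local test, a minimal generation sketch, assuming the standard `transformers` API; the sampling values mirror the suggested inference parameters in this card's front matter:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "BEE-spoke-data/beecoder-220M-python"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # ~220M params, fine on CPU

prompt = "def add_numbers(a, b):\n    return"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    min_new_tokens=8,
    do_sample=True,
    temperature=0.3,
    top_p=0.9,
    epsilon_cutoff=0.0008,
    repetition_penalty=1.02,
    no_repeat_ngram_size=8,
    renormalize_logits=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```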
---
**Record:** clydelyde/math_books | author: clydelyde | library: peft | pipeline: text-generation | downloads: 0 | likes: 0 | created: 2023-12-15T04:56:30Z | last modified: 2024-01-06T23:56:54Z
**Tags:** peft, safetensors, text-generation, arxiv:1910.09700, base_model:vilsonrodrigues/falcon-7b-instruct-sharded, base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded, region:us
---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the `BitsAndBytesConfig` sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
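For reference, the settings above correspond to the following `BitsAndBytesConfig`; this is a sketch, assuming a recent `transformers` with bitsandbytes support, not code taken from the training run:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# pass as: AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```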
### Framework versions
- PEFT 0.6.3.dev0
**Record:** CyberHarem/kotone_noda_sakuratrick | author: CyberHarem | library: null | pipeline: text-to-image | downloads: 0 | likes: 0 | created: 2024-01-06T23:36:09Z | last modified: 2024-01-06T23:43:44Z
**Tags:** art, text-to-image, dataset:CyberHarem/kotone_noda_sakuratrick, license:mit, region:us
---
license: mit
datasets:
- CyberHarem/kotone_noda_sakuratrick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kotone_noda_sakuratrick
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 3740, download `3740/kotone_noda_sakuratrick.pt` as the embedding and `3740/kotone_noda_sakuratrick.safetensors` as the LoRA. Using both files together, you can generate images of the desired character, as in the sketch below.
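A minimal `diffusers` sketch of the two-file loading described above. This is an illustrative assumption on the editor's part, not the HCP-Diffusion/webui workflow the card assumes; the paths are for step 3740, and whether the .pt embedding loads depends on its format:
```python
import torch
from diffusers import StableDiffusionPipeline

# base model used for the preview images in this card
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# the pt file is loaded as a textual-inversion embedding...
pipe.load_textual_inversion("3740/kotone_noda_sakuratrick.pt", token="kotone_noda_sakuratrick")
# ...and the safetensors file as a LoRA
pipe.load_lora_weights("3740/kotone_noda_sakuratrick.safetensors")

image = pipe("kotone_noda_sakuratrick, blush, long_hair, brown_hair, smile").images[0]
image.save("preview.png")
```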
**The best step we recommend is 3740**, with a score of 0.975. The trigger words are:
1. `kotone_noda_sakuratrick`
2. `blush, long_hair, brown_hair, smile, brown_eyes, blonde_hair`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.952 | [Download](5100/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/pattern_5.png) |  | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.969 | [Download](4760/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/pattern_5.png) |  | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.972 | [Download](4420/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/pattern_5.png) |  | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.953 | [Download](4080/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/pattern_5.png) |  | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| **3740** | **0.975** | [**Download**](3740/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/pattern_5.png) |  | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.902 | [Download](3400/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/pattern_5.png) |  | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.963 | [Download](3060/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/pattern_5.png) |  | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.895 | [Download](2720/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/pattern_5.png) |  | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.957 | [Download](2380/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/pattern_5.png) |  | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.888 | [Download](2040/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/pattern_5.png) |  | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.902 | [Download](1700/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/pattern_5.png) |  | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.902 | [Download](1360/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/pattern_5.png) |  | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.917 | [Download](1020/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/pattern_5.png) |  | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.812 | [Download](680/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/pattern_5.png) |  | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.356 | [Download](340/kotone_noda_sakuratrick.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/pattern_5.png) |  | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
**Record:** TheBloke/Pallas-0.5-frankenmerge-GPTQ | author: TheBloke | library: transformers | pipeline: text-generation | downloads: 31 | likes: 1 | created: 2024-01-06T20:18:09Z | last modified: 2024-01-06T23:31:18Z
**Tags:** transformers, safetensors, llama, text-generation, base_model:Mihaiii/Pallas-0.5-frankenmerge, base_model:quantized:Mihaiii/Pallas-0.5-frankenmerge, license:other, autotrain_compatible, text-generation-inference, 4-bit, gptq, region:us
---
base_model: Mihaiii/Pallas-0.5-frankenmerge
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-license
metrics:
- accuracy
model_creator: Mihai
model_name: Pallas 0.5 Frankenmerge
model_type: yi
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pallas 0.5 Frankenmerge - GPTQ
- Model creator: [Mihai](https://huggingface.co/Mihaiii)
- Original model: [Pallas 0.5 Frankenmerge](https://huggingface.co/Mihaiii/Pallas-0.5-frankenmerge)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Mihai's Pallas 0.5 Frankenmerge](https://huggingface.co/Mihaiii/Pallas-0.5-frankenmerge).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GGUF)
* [Mihai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mihaiii/Pallas-0.5-frankenmerge)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 19.44 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 20.12 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 22.18 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 15.69 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 37.02 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 17.65 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 37.83 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Pallas-0.5-frankenmerge-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, e.g. `TheBloke/Pallas-0.5-frankenmerge-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Pallas-0.5-frankenmerge-GPTQ`:
```shell
mkdir Pallas-0.5-frankenmerge-GPTQ
huggingface-cli download TheBloke/Pallas-0.5-frankenmerge-GPTQ --local-dir Pallas-0.5-frankenmerge-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Pallas-0.5-frankenmerge-GPTQ
huggingface-cli download TheBloke/Pallas-0.5-frankenmerge-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Pallas-0.5-frankenmerge-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Pallas-0.5-frankenmerge-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pallas-0.5-frankenmerge-GPTQ --local-dir Pallas-0.5-frankenmerge-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Pallas-0.5-frankenmerge-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Pallas-0.5-frankenmerge-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Pallas-0.5-frankenmerge-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Pallas-0.5-frankenmerge-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Pallas-0.5-frankenmerge-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Mihai's Pallas 0.5 Frankenmerge
This is a frankenmerge of [Mihaiii/Pallas-0.5](https://huggingface.co/Mihaiii/Pallas-0.5).
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It is not a general-purpose model: it is intended for reasoning and text comprehension rather than, for example, storytelling.
This model is trained on a private dataset.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
Merge config:
```yaml
slices:
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [0, 60]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [58, 60]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [55, 56]
merge_method: passthrough
dtype: bfloat16
```
**Record:** ostris/CLIP-ViT-H-14-448 | author: ostris | library: transformers | pipeline: zero-shot-image-classification | downloads: 14 | likes: 2 | created: 2024-01-06T23:05:18Z | last modified: 2024-01-06T23:30:05Z
**Tags:** transformers, safetensors, clip_vision_model, zero-shot-image-classification, license:mit, endpoints_compatible, region:us
---
license: mit
library_name: transformers
pipeline_tag: zero-shot-image-classification
---
You probably do not need this unless you are training your own IP Adapters.
Modified version of the vision encoder of [CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) to handle 448 x 448 inputs instead of the original 224 x 224 inputs. It will probably not work for classification (as is), but should work for IP+ adapters that use CLIP-ViT-H, though they will need to be fine-tuned a little more.
Hidden layer outputs go from `(257, 1280)` to `(1025, 1280)` (with patch size 14, 448 px gives 32 x 32 = 1024 patch tokens plus the class token, vs 16 x 16 = 256 plus one at 224 px), which can be digested by the Resampler without modification or weight resizing.
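A minimal sketch of grabbing the hidden states, assuming the repo ships a standard `transformers` vision config and image processor (verify against the actual repo files before relying on this):
```python
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

repo = "ostris/CLIP-ViT-H-14-448"
model = CLIPVisionModel.from_pretrained(repo)
processor = CLIPImageProcessor.from_pretrained(repo)

image = Image.open("example.png")  # any input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# IP-Adapter-style use typically takes the penultimate hidden layer
print(out.hidden_states[-2].shape)  # expected (1, 1025, 1280) at 448 x 448
```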
**Record:** bjyoo/llama2-qlora-finetunined-french | author: bjyoo | library: peft | pipeline: null | downloads: 0 | likes: 0 | created: 2024-01-06T23:25:08Z | last modified: 2024-01-06T23:25:24Z
**Tags:** peft, safetensors, arxiv:1910.09700, base_model:TinyPixel/Llama-2-7B-bf16-sharded, base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded, region:us
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
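In the absence of an official example, a minimal loading sketch based only on the front matter (library: peft, base model: TinyPixel/Llama-2-7B-bf16-sharded); treat it as an assumption, not the author's documented usage:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# loads the base model and applies this repo's LoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained("bjyoo/llama2-qlora-finetunined-french")
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")
```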
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
**Record:** jeiku/Bluemoon_cleaned_StableLM | author: jeiku | library: null | pipeline: null | downloads: 0 | likes: 1 | created: 2024-01-06T23:15:16Z | last modified: 2024-01-06T23:16:49Z
**Tags:** safetensors, en, dataset:usernamedesu/bluemoon-rp-cleaned-jsonl, license:other, region:us
---
license: other
datasets:
- usernamedesu/bluemoon-rp-cleaned-jsonl
language:
- en
---
**Record:** rbrgAlou/poca-SoccerTwos | author: rbrgAlou | library: ml-agents | pipeline: reinforcement-learning | downloads: 32 | likes: 0 | created: 2024-01-06T23:12:20Z | last modified: 2024-01-06T23:12:44Z
**Tags:** ml-agents, tensorboard, onnx, SoccerTwos, deep-reinforcement-learning, reinforcement-learning, ML-Agents-SoccerTwos, region:us
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rbrgAlou/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
**Record:** TheBloke/Sensualize-Solar-10.7B-AWQ | author: TheBloke | library: transformers | pipeline: text-generation | downloads: 8 | likes: 2 | created: 2024-01-06T22:53:40Z | last modified: 2024-01-06T23:11:15Z
**Tags:** transformers, safetensors, llama, text-generation, en, base_model:Sao10K/Sensualize-Solar-10.7B, base_model:quantized:Sao10K/Sensualize-Solar-10.7B, license:cc-by-nc-4.0, autotrain_compatible, text-generation-inference, 4-bit, awq, region:us
---
base_model: Sao10K/Sensualize-Solar-10.7B
inference: false
language:
- en
license: cc-by-nc-4.0
model_creator: Saofiq
model_name: Sensualize Solar 10.7B
model_type: solar
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Sensualize Solar 10.7B - AWQ
- Model creator: [Saofiq](https://huggingface.co/Sao10K)
- Original model: [Sensualize Solar 10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B)
<!-- description start -->
## Description
This repo contains AWQ model files for [Saofiq's Sensualize Solar 10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-GGUF)
* [Saofiq's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
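For reference, the template can be applied programmatically; the snippet below is a trivial sketch (the `make_prompt` helper name is illustrative, not part of any API):
```python
# Trivial sketch: wrap a user instruction in the Alpaca template above
def make_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(make_prompt("Tell me about AI"))
```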
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Sensualize-Solar-10.7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Sensualize-Solar-10.7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Sensualize-Solar-10.7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Sensualize-Solar-10.7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Sensualize-Solar-10.7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Sensualize-Solar-10.7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Saofiq's Sensualize Solar 10.7B
A finetune of base Solar, trained as an 8-bit LoRA; it took around 12 hours on 2x RTX 6000 Adas.
This is meant to be a verbose, smart ERP model.
Experimental.
***
### Prompt Format: Alpaca
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
|
ntc-ai/SDXL-LoRA-slider.extremely-cozy
|
ntc-ai
| 2024-01-06T23:08:59Z | 26 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-06T23:08:56Z |
---
language:
- en
thumbnail: "images/evaluate/extremely cozy.../extremely cozy_17_3.0.png"
widget:
- text: extremely cozy
output:
url: images/extremely cozy_17_3.0.png
- text: extremely cozy
output:
url: images/extremely cozy_19_3.0.png
- text: extremely cozy
output:
url: images/extremely cozy_20_3.0.png
- text: extremely cozy
output:
url: images/extremely cozy_21_3.0.png
- text: extremely cozy
output:
url: images/extremely cozy_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "extremely cozy"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - extremely cozy (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/extremely cozy_17_-3.0.png" width=256 height=256 /> | <img src="images/extremely cozy_17_0.0.png" width=256 height=256 /> | <img src="images/extremely cozy_17_3.0.png" width=256 height=256 /> |
| <img src="images/extremely cozy_19_-3.0.png" width=256 height=256 /> | <img src="images/extremely cozy_19_0.0.png" width=256 height=256 /> | <img src="images/extremely cozy_19_3.0.png" width=256 height=256 /> |
| <img src="images/extremely cozy_20_-3.0.png" width=256 height=256 /> | <img src="images/extremely cozy_20_0.0.png" width=256 height=256 /> | <img src="images/extremely cozy_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
extremely cozy
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.extremely-cozy', weight_name='extremely cozy.safetensors', adapter_name="extremely cozy")
# Activate the LoRA
pipe.set_adapters(["extremely cozy"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, extremely cozy"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
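Because this is a slider LoRA, the adapter weight acts as the strength axis shown in the table above, and negative weights push generations in the opposite direction. Continuing from the snippet above (same pipeline, prompt and parameters assumed):
```python
# Slide away from "extremely cozy" by using a negative adapter weight
pipe.set_adapters(["extremely cozy"], adapter_weights=[-3.0])
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
             guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result_negative.png')
```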
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 910 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
CyberHarem/kaede_ikeno_sakuratrick
|
CyberHarem
| 2024-01-06T23:04:59Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/kaede_ikeno_sakuratrick",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-06T22:56:24Z |
---
license: mit
datasets:
- CyberHarem/kaede_ikeno_sakuratrick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kaede_ikeno_sakuratrick
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file serves as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 5100, download `5100/kaede_ikeno_sakuratrick.pt` as the embedding and `5100/kaede_ikeno_sakuratrick.safetensors` as the LoRA; with both files loaded together you can generate images of the character (a diffusers loading sketch is given below the trigger words).
**The best step we recommend is 5100**, with a score of 0.948. The trigger words are:
1. `kaede_ikeno_sakuratrick`
2. `short_hair, hair_ornament, hairclip, blush, black_hair, brown_hair, green_eyes, necktie`
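As a loading sketch in diffusers (paths and the preview base model are taken from this card; this assumes the step-5100 files were downloaded locally and are in the usual webui-compatible formats, which may require conversion for HCP-Diffusion outputs):
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model used for the preview images on this card
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")

# Both files from the chosen step must be used together:
# the .pt file as a textual-inversion embedding, the .safetensors file as a LoRA
pipe.load_textual_inversion("5100/kaede_ikeno_sakuratrick.pt", token="kaede_ikeno_sakuratrick")
pipe.load_lora_weights("5100", weight_name="kaede_ikeno_sakuratrick.safetensors")

image = pipe("kaede_ikeno_sakuratrick, short_hair, hair_ornament, hairclip", num_inference_steps=30).images[0]
image.save("kaede.png")
```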
We do not recommend this model for the following groups, with our regrets:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with a fully automated LoRA training process, or who believe character models must be trained manually to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5100** | **0.948** | [**Download**](5100/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.934 | [Download](4760/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.945 | [Download](4420/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.937 | [Download](4080/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.926 | [Download](3740/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.944 | [Download](3400/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.895 | [Download](3060/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.902 | [Download](2720/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.887 | [Download](2380/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.886 | [Download](2040/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.856 | [Download](1700/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.824 | [Download](1360/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.735 | [Download](1020/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.705 | [Download](680/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.495 | [Download](340/kaede_ikeno_sakuratrick.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
air217/distilhubert-finetuned-gtzan
|
air217
| 2024-01-06T22:57:36Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-04T19:52:49Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TheBloke/zephyr-quiklang-3b-4K-GGUF
|
TheBloke
| 2024-01-06T22:41:36Z | 114 | 4 |
transformers
|
[
"transformers",
"gguf",
"stablelm_epoch",
"causal_lm",
"text-generation",
"dataset:teknium/openhermes",
"base_model:Walmart-the-bag/zephyr-quiklang-3b-4K",
"base_model:quantized:Walmart-the-bag/zephyr-quiklang-3b-4K",
"license:other",
"region:us",
"conversational"
] |
text-generation
| 2024-01-06T22:39:21Z |
---
base_model: Walmart-the-bag/zephyr-quiklang-3b-4K
datasets:
- teknium/openhermes
inference: false
license: other
model_creator: wbag
model_name: Zephyr Quiklang 3B 4K
model_type: stablelm_epoch
pipeline_tag: text-generation
prompt_template: '<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'
quantized_by: TheBloke
tags:
- causal_lm
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Zephyr Quiklang 3B 4K - GGUF
- Model creator: [wbag](https://huggingface.co/Walmart-the-bag)
- Original model: [Zephyr Quiklang 3B 4K](https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b-4K)
<!-- description start -->
## Description
This repo contains GGUF format model files for [wbag's Zephyr Quiklang 3B 4K](https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b-4K).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF)
* [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b-4K)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Zephyr
```
<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
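As a worked example of where these figures come from: a Q4_K super-block holds 8 blocks × 32 weights = 256 weights at 4 bits each (1024 bits), plus a 6-bit scale and 6-bit min for each of the 8 blocks (96 bits) and two fp16 super-block scale factors (32 bits), giving 1152 bits in total, i.e. 1152 / 256 = 4.5 bpw.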
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [zephyr-quiklang-3b-4k.Q2_K.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q2_K.gguf) | Q2_K | 2 | 1.20 GB| 3.70 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-quiklang-3b-4k.Q3_K_S.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q3_K_S.gguf) | Q3_K_S | 3 | 1.25 GB| 3.75 GB | very small, high quality loss |
| [zephyr-quiklang-3b-4k.Q3_K_M.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q3_K_M.gguf) | Q3_K_M | 3 | 1.39 GB| 3.89 GB | very small, high quality loss |
| [zephyr-quiklang-3b-4k.Q3_K_L.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q3_K_L.gguf) | Q3_K_L | 3 | 1.51 GB| 4.01 GB | small, substantial quality loss |
| [zephyr-quiklang-3b-4k.Q4_0.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q4_0.gguf) | Q4_0 | 4 | 1.61 GB| 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-quiklang-3b-4k.Q4_K_S.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q4_K_S.gguf) | Q4_K_S | 4 | 1.62 GB| 4.12 GB | small, greater quality loss |
| [zephyr-quiklang-3b-4k.Q4_K_M.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q4_K_M.gguf) | Q4_K_M | 4 | 1.71 GB| 4.21 GB | medium, balanced quality - recommended |
| [zephyr-quiklang-3b-4k.Q5_0.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q5_0.gguf) | Q5_0 | 5 | 1.94 GB| 4.44 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-quiklang-3b-4k.Q5_K_S.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q5_K_S.gguf) | Q5_K_S | 5 | 1.94 GB| 4.44 GB | large, low quality loss - recommended |
| [zephyr-quiklang-3b-4k.Q5_K_M.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q5_K_M.gguf) | Q5_K_M | 5 | 1.99 GB| 4.49 GB | large, very low quality loss - recommended |
| [zephyr-quiklang-3b-4k.Q6_K.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q6_K.gguf) | Q6_K | 6 | 2.30 GB| 4.80 GB | very large, extremely low quality loss |
| [zephyr-quiklang-3b-4k.Q8_0.gguf](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF/blob/main/zephyr-quiklang-3b-4k.Q8_0.gguf) | Q8_0 | 8 | 2.97 GB| 5.47 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/zephyr-quiklang-3b-4K-GGUF and below it, a specific filename to download, such as: zephyr-quiklang-3b-4k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/zephyr-quiklang-3b-4K-GGUF zephyr-quiklang-3b-4k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/zephyr-quiklang-3b-4K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/zephyr-quiklang-3b-4K-GGUF zephyr-quiklang-3b-4k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m zephyr-quiklang-3b-4k.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|system|>\n{system_message}</s>\n<|user|>\n{prompt}</s>\n<|assistant|>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./zephyr-quiklang-3b-4k.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|system|>\n{system_message}</s>\n<|user|>\n{prompt}</s>\n<|assistant|>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./zephyr-quiklang-3b-4k.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
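A minimal LangChain + llama-cpp-python sketch (import paths vary across LangChain versions; this assumes a recent release with the `langchain_community` package installed):
```python
from langchain_community.llms import LlamaCpp

# Point at the GGUF file downloaded earlier; parameters mirror the llama.cpp example above
llm = LlamaCpp(
    model_path="./zephyr-quiklang-3b-4k.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm.invoke("<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nTell me about AI</s>\n<|assistant|>"))
```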
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: wbag's Zephyr Quiklang 3B 4K
# Description
This is the 4K version of https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b with 1000 more samples of openhermes.
# Original Model Description
This is a finetune of [StableLM-Zephyr-3B](https://huggingface.co/stabilityai/stablelm-zephyr-3b) with 2 datasets, toxic-dpo and openhermes with 10000 samples.
# Training Parameters
- 1xA6000-48GB
- batch_size: 6
- learning_rate: 5e-5
# Datasets:
- unalignment/toxic-dpo-v0.1
- teknium/openhermes
# Metrics/Basic Eval:
"predict_bleu-4": 31.594154999999997,
"predict_rouge-1": 44.092935,
"predict_rouge-2": 22.276081000000005,
"predict_rouge-l": 34.506909,
"predict_runtime": 121.7549,
"predict_samples_per_second": 0.821,
"predict_steps_per_second": 0.107
<!-- original-model-card end -->
|
alirzb/S5_M1_fold4_BEiT_42621847
|
alirzb
| 2024-01-06T22:30:09Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T20:44:04Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S5_M1_fold4_BEiT_42621847
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold4_BEiT_42621847
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.007 | 1.0 | 368 | 0.0027 | 1.0 |
| 0.0132 | 2.0 | 737 | 0.0019 | 1.0 |
| 0.0255 | 3.0 | 1105 | 0.0053 | 0.9976 |
| 0.0001 | 4.0 | 1474 | 0.0060 | 0.9976 |
| 0.0001 | 4.99 | 1840 | 0.0018 | 0.9992 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Felladrin/onnx-Pythia-31M-Chat-v1
|
Felladrin
| 2024-01-06T22:24:11Z | 9 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"gpt_neox",
"text-generation",
"conversational",
"en",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:databricks/databricks-dolly-15k",
"dataset:THUDM/webglm-qa",
"dataset:starfishmedical/webGPT_x_dolly",
"dataset:Amod/mental_health_counseling_conversations",
"dataset:sablo/oasst2_curated",
"dataset:cognitivecomputations/wizard_vicuna_70k_unfiltered",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:Felladrin/Pythia-31M-Chat-v1",
"base_model:quantized:Felladrin/Pythia-31M-Chat-v1",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-06T17:11:24Z |
---
license: apache-2.0
library_name: "transformers.js"
base_model: Felladrin/Pythia-31M-Chat-v1
language:
- en
datasets:
- totally-not-an-llm/EverythingLM-data-V3
- databricks/databricks-dolly-15k
- THUDM/webglm-qa
- starfishmedical/webGPT_x_dolly
- Amod/mental_health_counseling_conversations
- sablo/oasst2_curated
- cognitivecomputations/wizard_vicuna_70k_unfiltered
- mlabonne/chatml_dpo_pairs
---
INT8 ONNX version of [Felladrin/Pythia-31M-Chat-v1](https://huggingface.co/Felladrin/Pythia-31M-Chat-v1) to use with [Transformers.js](https://huggingface.co/docs/transformers.js).
|
s3nh/s3nh-Medicine-Noromaid-13b-GGUF
|
s3nh
| 2024-01-06T22:21:11Z | 69 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T18:26:55Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/s3nh/Medicine-Noromaid-13b).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
* Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors / new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and the model to be annotated with additional information that may be useful for inference or for identifying the model.
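To illustrate the key-value metadata structure, the `gguf` Python package (published from the llama.cpp repository) can read a model's metadata without loading the tensors. A rough sketch, assuming a local GGUF file (the filename here is illustrative):
```python
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("Medicine-Noromaid-13b.Q4_K_M.gguf")
# Each metadata entry is a typed key-value field
for key, field in reader.fields.items():
    print(key, field.types)
```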
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
|
malhajar/Mixtral-8x7B-v0.1-turkish
|
malhajar
| 2024-01-06T22:20:19Z | 17 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mixtral",
"text-generation",
"tr",
"dataset:malhajar/alpaca-gpt4-tr",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T15:58:39Z |
---
datasets:
- malhajar/alpaca-gpt4-tr
language:
- tr
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
malhajar/Mixtral-8x7B-v0.1-turkish is a fine-tuned version of Mixtral-8x7B-v0.1, trained with supervised fine-tuning (SFT).
This model can answer questions in Turkish, as it was fine-tuned on a Turkish dataset, specifically [`alpaca-gpt4-tr`](https://huggingface.co/datasets/malhajar/alpaca-gpt4-tr)
### Model Description
- **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** [`mistralai/Mixtral-8x7B-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## How to Get Started with the Model
Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/Mixtral-8x7B-v0.1-turkish"
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Türkiyenin en büyük şehir nedir?"
# For generating a response
prompt = f'''
### Instruction: {question} ### Response:
'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(inputs=input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id,
                        top_k=50, do_sample=True, repetition_penalty=1.3, top_p=0.95)
response = tokenizer.decode(output[0])
print(response)
```
|
alirzb/S5_M1_fold5_BEiT_42621849
|
alirzb
| 2024-01-06T22:15:43Z | 147 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T20:50:24Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S5_M1_fold5_BEiT_42621849
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold5_BEiT_42621849
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0023 | 1.0 | 368 | 0.0052 | 0.9984 |
| 0.0201 | 2.0 | 737 | 0.0208 | 0.9952 |
| 0.0 | 3.0 | 1105 | 0.0257 | 0.9936 |
| 0.0007 | 4.0 | 1474 | 0.0005 | 1.0 |
| 0.0001 | 4.99 | 1840 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
alirzb/S5_M1_fold2_BEiT_42621842
|
alirzb
| 2024-01-06T21:49:32Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T20:04:15Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S5_M1_fold2_BEiT_42621842
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold2_BEiT_42621842
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0283 | 1.0 | 368 | 0.0190 | 0.9960 |
| 0.014 | 2.0 | 737 | 0.0153 | 0.9960 |
| 0.0044 | 3.0 | 1105 | 0.0032 | 0.9992 |
| 0.0019 | 4.0 | 1474 | 0.0130 | 0.9976 |
| 0.0 | 4.99 | 1840 | 0.0008 | 0.9992 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
alirzb/S5_M1_fold1_BEiT_42621837
|
alirzb
| 2024-01-06T21:41:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T20:04:16Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S5_M1_fold1_BEiT_42621837
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold1_BEiT_42621837
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0098
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.012 | 1.0 | 368 | 0.0211 | 0.9944 |
| 0.0349 | 2.0 | 737 | 0.0147 | 0.9960 |
| 0.0014 | 3.0 | 1105 | 0.0075 | 0.9976 |
| 0.0001 | 4.0 | 1474 | 0.0071 | 0.9984 |
| 0.0065 | 4.99 | 1840 | 0.0098 | 0.9976 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Mihaiii/Pallas-0.5-frankenmerge
|
Mihaiii
| 2024-01-06T21:36:26Z | 21 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Mihaiii/Pallas-0.5",
"base_model:finetune:Mihaiii/Pallas-0.5",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-05T01:21:37Z |
---
base_model: Mihaiii/Pallas-0.5
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
This is a frankenmerge of [Mihaiii/Pallas-0.5](https://huggingface.co/Mihaiii/Pallas-0.5).
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but for reasoning and text comprehension.
This model is trained on a private dataset.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
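As an illustration (not from the original card), a minimal generation sketch following this format; it assumes enough GPU memory for a 34B-class model and that `accelerate` is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mihaiii/Pallas-0.5-frankenmerge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the SYSTEM/USER/ASSISTANT prompt described above.
prompt = (
    "SYSTEM: You are a careful assistant for reasoning and text comprehension.\n"
    "USER: Summarize the argument below in one sentence.\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```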
Merge config:
```yaml
slices:
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [0, 60]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [58, 60]
- sources:
- model: "Mihaiii/Pallas-0.5"
layer_range: [55, 56]
merge_method: passthrough
dtype: bfloat16
```
Quants:
[TheBloke/Pallas-0.5-frankenmerge-GGUF](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GGUF)
[TheBloke/Pallas-0.5-frankenmerge-GPTQ](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ)
[TheBloke/Pallas-0.5-frankenmerge-AWQ](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-AWQ)
|
yy0514/llama2-7b-qlora-lek-train-4-epochs
|
yy0514
| 2024-01-06T21:36:25Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-06T20:46:04Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-7b-qlora-lek-train-4-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-qlora-lek-train-4-epochs
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_1024-3e-4
|
kanishka
| 2024-01-06T21:21:11Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T22:53:42Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_1024-3e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_1024-3e-4
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4221
- Accuracy: 0.4088
## Model description
More information needed
## Intended uses & limitations
More information needed
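Pending details, a minimal generation sketch (the repo id is assumed from the card name):

```python
from transformers import pipeline

# Repo id inferred from the card name; adjust if it differs.
generator = pipeline(
    "text-generation",
    model="kanishka/smolm-autoreg-bpe-counterfactual-babylm-indef-anan-seed_1024-3e-4",
)
print(generator("The child saw a", max_new_tokens=20)[0]["generated_text"])
```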
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 64
- seed: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.7362 | 1.0 | 18595 | 3.8958 | 0.3477 |
| 3.4364 | 2.0 | 37190 | 3.6265 | 0.3748 |
| 3.2911 | 3.0 | 55785 | 3.5278 | 0.3892 |
| 3.2068 | 4.0 | 74380 | 3.4636 | 0.3952 |
| 3.1446 | 5.0 | 92975 | 3.4225 | 0.3997 |
| 3.1034 | 6.0 | 111570 | 3.4072 | 0.4020 |
| 3.0633 | 7.0 | 130165 | 3.3788 | 0.4040 |
| 3.0315 | 8.0 | 148760 | 3.3888 | 0.4051 |
| 2.9995 | 9.0 | 167355 | 3.3858 | 0.4065 |
| 2.971 | 10.0 | 185950 | 3.3903 | 0.4062 |
| 2.9521 | 11.0 | 204545 | 3.3803 | 0.4079 |
| 2.9281 | 12.0 | 223140 | 3.3880 | 0.4080 |
| 2.9058 | 13.0 | 241735 | 3.3740 | 0.4083 |
| 2.884 | 14.0 | 260330 | 3.3773 | 0.4087 |
| 2.8641 | 15.0 | 278925 | 3.3963 | 0.4087 |
| 2.847 | 16.0 | 297520 | 3.4004 | 0.4085 |
| 2.8249 | 17.0 | 316115 | 3.4037 | 0.4089 |
| 2.8045 | 18.0 | 334710 | 3.4124 | 0.4087 |
| 2.7866 | 19.0 | 353305 | 3.4144 | 0.4090 |
| 2.7746 | 20.0 | 371900 | 3.4221 | 0.4088 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.14.1
|
thierryteisseire/TinyLlama-1.1B-Chat-v1.0-fine-tuned
|
thierryteisseire
| 2024-01-06T21:19:17Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-06T17:00:59Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
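Pending official instructions, a minimal sketch for loading these PEFT weights (the repo id is assumed from the card name; `peft`, `transformers`, and `accelerate` are assumed installed):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Adapter repo id inferred from the card name; the base model comes from the card's metadata.
adapter_id = "thierryteisseire/TinyLlama-1.1B-Chat-v1.0-fine-tuned"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```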
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
thierryteisseire/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters
|
thierryteisseire
| 2024-01-06T21:18:30Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-06T17:00:43Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
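Pending official instructions, one way to attach these adapter weights to the base model (a sketch; the repo id is assumed from the card name):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # from the card's base_model field
adapter_id = "thierryteisseire/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters"  # assumed repo id

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base model with the adapters
tokenizer = AutoTokenizer.from_pretrained(base_id)
```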
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Danielbrdz/Barcenas-Mixtral-8x7b-4bit
|
Danielbrdz
| 2024-01-06T21:01:49Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mistral",
"conversational",
"en",
"es",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-06T19:25:50Z |
---
inference: false
language:
- en
- es
library_name: transformers
license: apache-2.0
model_type: mixtral
pipeline_tag: text-generation
tags:
- mistral
- mixtral
---
Barcenas Mixtral 8x7b is based on argilla/notux-8x7b-v1.
It is a 4-bit version of that model, made to be more accessible to users.
Trained with DPO and built on MoE technology, it is a powerful and innovative model.
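A minimal loading sketch (not from the original card; since the checkpoint ships pre-quantized 4-bit weights, `bitsandbytes` and `accelerate` are assumed installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielbrdz/Barcenas-Mixtral-8x7b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit bitsandbytes quantization config is stored with the checkpoint,
# so no extra quantization arguments should be needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```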
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
|
bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2
|
bartowski
| 2024-01-06T21:00:48Z | 0 | 1 | null |
[
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"laser",
"text-generation",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-06T20:41:25Z |
---
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
- laser
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of NeuralHermes-2.5-Mistral-7B-laser
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `NeuralHermes-2.5-Mistral-7B-laser-exl2`:
```shell
mkdir NeuralHermes-2.5-Mistral-7B-laser-exl2
huggingface-cli download bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2 --local-dir NeuralHermes-2.5-Mistral-7B-laser-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir NeuralHermes-2.5-Mistral-7B-laser-exl2
huggingface-cli download bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2 --revision 4_0 --local-dir NeuralHermes-2.5-Mistral-7B-laser-exl2 --local-dir-use-symlinks False
```
|
TheBloke/bagel-8x7b-v0.2-AWQ
|
TheBloke
| 2024-01-06T20:58:52Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:ai2_arc",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"base_model:jondurbin/bagel-8x7b-v0.2",
"base_model:quantized:jondurbin/bagel-8x7b-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-06T20:27:52Z |
---
base_model: jondurbin/bagel-8x7b-v0.2
datasets:
- ai2_arc
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
inference: false
license: apache-2.0
model_creator: Jon Durbin
model_name: Bagel 8X7B v0.2
model_type: mixtral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Bagel 8X7B v0.2 - AWQ
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2)
<!-- description start -->
## Description
This repo contains AWQ model files for [Jon Durbin's Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2).
**MIXTRAL AWQ**
This is a Mixtral AWQ model.
For AutoAWQ inference, please install AutoAWQ 0.1.8 or later.
Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git`
vLLM: version 0.2.6 is confirmed to support Mixtral AWQs.
TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!)
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/bagel-8x7b-v0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.65 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/bagel-8x7b-v0.2-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `bagel-8x7b-v0.2-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/bagel-8x7b-v0.2-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/bagel-8x7b-v0.2-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/bagel-8x7b-v0.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/bagel-8x7b-v0.2-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jon Durbin's Bagel 8X7B v0.2
# A bagel, with everything (except DPO)

## Overview
An experimental fine-tune of [mixtral-8x7b-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [bagel](https://github.com/jondurbin/bagel)
This is the model after the SFT phase, before DPO has been applied.
Hardware kindly provided by [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon)
### Data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned)
- Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank)
  - Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT)
- Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa)
  - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
## How to easily download and use this model
[Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.
1) For this model rent the [Jon Durbin 4xA6000](https://shop.massedcompute.com/products/jon-durbin-4x-a6000?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) Virtual Machine
2) After you start your rental you will receive an email with instructions on how to Login to the VM
3) Once inside the VM, open the terminal and run `conda activate text-generation-inference`
4) Then `cd Desktop/text-generation-inference/`
5) Run `volume=$PWD/data`
6) Run `model=jondurbin/bagel-8x7b-v0.2`
7) `sudo docker run --gpus '"device=0,1,2,3"' --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
8) The model will take some time to load...
9) Once loaded the model will be available on port 8080
Sample command within the VM
```
curl 0.0.0.0:8080/generate \
-X POST \
-d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}'\
-H 'Content-Type: application/json'
```
You can also access the model from outside the VM
```
curl IP_ADDRESS_PROVIDED_BY_MASSED_COMPUTE_VM:8080/generate \
-X POST \
-d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}'\
-H 'Content-Type: application/json'
```
For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA)
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is really basically 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
### Default via chat template
The model's `tokenizer_config.json` includes the default chat template (llama-2), so you can simply use the `apply_chat_template` method to build the full prompt.
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/bagel-8x7b-v0.2')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Contribute
If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and either make a PR or open an issue with details.
To help me with the fine-tuning costs (which are extremely expensive for these large combined datasets):
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Guide for certain tasks
#### RA(G)/contextual question answering
The model was trained to ignore what it thinks it knows, and uses the context to answer the questions, when using the format below.
The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a contextual prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
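For convenience, a small helper (a sketch, not part of the original card) that assembles prompts in this format:

```python
def build_context_prompt(blocks, instruction):
    # blocks: list of (metadata dict, text) pairs; returns a bagel-style contextual prompt.
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT\nBEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.extend(["BEGININSTRUCTION", instruction, "ENDINSTRUCTION"])
    return "\n".join(parts)

# Reproduces the blueberry example above.
print(build_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
))
```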
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
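Because the response is plain JSON, dispatching it is straightforward. A sketch (not from the original card; `file_analytics` is the hypothetical function named in the prompt above):

```python
import json

# The JSON response shown above, as returned by the model.
model_output = '''
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {"keyword": "Python"}
  }
}
'''

def file_analytics(action, filters):
    # Hypothetical implementation of the function described in the prompt.
    if action == "count_occurrences":
        with open("my_text_file.txt") as f:  # hypothetical input file
            return f.read().count(filters["keyword"])
    raise ValueError("unsupported action: " + action)

call = json.loads(model_output)
handlers = {"file_analytics": file_analytics}
print(handlers[call["function"]](**call["params"]))
```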
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Swap :evidenceN: references for previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string
    # ... return text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # Group 2 is the tool name, group 3 the bracketed input (brackets included).
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```
### Fine-tuning information
You can find charts and the full configuration used to fine-tune this model on [weights and biases](https://wandb.ai/jondurbin/bagel-8x7b-v0.2/runs/agxjjdso?workspace=user-jondurbin)
The model was fine-tuned on an 8x a6000 instance, for 4 days, 15 hours, 6 minutes and 42 seconds.
### Licence and usage restrictions
The base model is mixtral-8x7b-v0.1, which is licensed as apache-2.0 - no issues there.
The fine-tuning data, however, includes several datasets that have data generated at least in part by OpenAI's gpt-4.
I am not a lawyer, so I can't help determine if this is actually commercially viable, but some questions that often come up are:
- Does the OpenAI ToS apply only to the user who created the dataset initially, and not subsequent models?
- If the dataset was released under a permissive license, but actually includes OpenAI generated data, does that ToS supersede the license?
- Does the dataset fall completely under fair use anyways, since the model isn't really capable of reproducing the entire training set verbatim?
Use your best judgement and seek legal advice if you are concerned about the terms. In any case, by using this model, you agree to completely indemnify me.
|
SaladSlayer00/twin_matcher_beta
|
SaladSlayer00
| 2024-01-06T20:55:51Z | 19 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"resnet",
"image-classification",
"generated_from_keras_callback",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T19:08:06Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_keras_callback
model-index:
- name: SaladSlayer00/twin_matcher_beta
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SaladSlayer00/twin_matcher_beta
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0286
- Validation Loss: 1.1866
- Validation Accuracy: 0.7159
- Epoch: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
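In the meantime, a minimal TensorFlow inference sketch (the repo id is assumed from the card name, and it is assumed the repo includes an image processor config):

```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "SaladSlayer00/twin_matcher_beta"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("face.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```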
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:-----:|
| 7.0814 | 4.8848 | 0.0133 | 0 |
| 4.6679 | 4.5568 | 0.0666 | 1 |
| 4.3536 | 4.1337 | 0.1221 | 2 |
| 3.8915 | 3.6650 | 0.2053 | 3 |
| 3.4256 | 3.2568 | 0.2597 | 4 |
| 3.0033 | 2.8885 | 0.3185 | 5 |
| 2.6252 | 2.5913 | 0.3973 | 6 |
| 2.2829 | 2.3391 | 0.4406 | 7 |
| 1.9821 | 2.1352 | 0.4928 | 8 |
| 1.7076 | 1.9428 | 0.5250 | 9 |
| 1.4693 | 1.8008 | 0.5627 | 10 |
| 1.2464 | 1.6763 | 0.5949 | 11 |
| 1.0552 | 1.5872 | 0.6093 | 12 |
| 0.9105 | 1.4840 | 0.6238 | 13 |
| 0.7595 | 1.4117 | 0.6426 | 14 |
| 0.6390 | 1.3601 | 0.6582 | 15 |
| 0.5328 | 1.3283 | 0.6548 | 16 |
| 0.4539 | 1.2958 | 0.6681 | 17 |
| 0.3655 | 1.2470 | 0.6715 | 18 |
| 0.3183 | 1.2389 | 0.6770 | 19 |
| 0.2597 | 1.2309 | 0.6792 | 20 |
| 0.2269 | 1.2193 | 0.6881 | 21 |
| 0.1750 | 1.2206 | 0.6781 | 22 |
| 0.1553 | 1.1853 | 0.6970 | 23 |
| 0.1313 | 1.1949 | 0.6781 | 24 |
| 0.1058 | 1.1935 | 0.6870 | 25 |
| 0.0903 | 1.2042 | 0.6859 | 26 |
| 0.0762 | 1.1950 | 0.6948 | 27 |
| 0.0654 | 1.1798 | 0.7037 | 28 |
| 0.0588 | 1.1955 | 0.6959 | 29 |
| 0.0488 | 1.1788 | 0.7048 | 30 |
| 0.0444 | 1.1845 | 0.7037 | 31 |
| 0.0374 | 1.1969 | 0.7026 | 32 |
| 0.0327 | 1.1907 | 0.7048 | 33 |
| 0.0286 | 1.1866 | 0.7159 | 34 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
alirzb/S2_M1_R3_BEiT_42621830
|
alirzb
| 2024-01-06T20:49:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T19:37:38Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S2_M1_R3_BEiT_42621830
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S2_M1_R3_BEiT_42621830
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Accuracy: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0088 | 1.0 | 307 | 0.0336 | 0.9942 |
| 0.011 | 2.0 | 614 | 0.0439 | 0.9932 |
| 0.0009 | 3.0 | 921 | 0.0163 | 0.9961 |
| 0.003 | 4.0 | 1229 | 0.0130 | 0.9971 |
| 0.0001 | 5.0 | 1535 | 0.0128 | 0.9981 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
CyberHarem/haruka_takayama_sakuratrick
|
CyberHarem
| 2024-01-06T20:40:51Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/haruka_takayama_sakuratrick",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-06T20:27:58Z |
---
license: mit
datasets:
- CyberHarem/haruka_takayama_sakuratrick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of haruka_takayama_sakuratrick
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 9100, you need to download `9100/haruka_takayama_sakuratrick.pt` as the embedding and `9100/haruka_takayama_sakuratrick.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 9100**, with a score of 0.899. The trigger words are:
1. `haruka_takayama_sakuratrick`
2. `blush, brown_hair, long_hair, ribbon, hair_ribbon, necktie, purple_eyes, red_hair`
Use of this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------|
| 10500 | 0.874 | [Download](10500/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10500/previews/nude.png) | [<NSFW, click to see>](10500/previews/nude2.png) |  |  |
| 9800 | 0.827 | [Download](9800/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9800/previews/nude.png) | [<NSFW, click to see>](9800/previews/nude2.png) |  |  |
| **9100** | **0.899** | [**Download**](9100/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9100/previews/nude.png) | [<NSFW, click to see>](9100/previews/nude2.png) |  |  |
| 8400 | 0.877 | [Download](8400/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| 7700 | 0.870 | [Download](7700/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7700/previews/nude.png) | [<NSFW, click to see>](7700/previews/nude2.png) |  |  |
| 7000 | 0.815 | [Download](7000/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6300 | 0.844 | [Download](6300/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6300/previews/nude.png) | [<NSFW, click to see>](6300/previews/nude2.png) |  |  |
| 5600 | 0.806 | [Download](5600/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 4900 | 0.795 | [Download](4900/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4900/previews/nude.png) | [<NSFW, click to see>](4900/previews/nude2.png) |  |  |
| 4200 | 0.739 | [Download](4200/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3500 | 0.776 | [Download](3500/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 2800 | 0.739 | [Download](2800/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2100 | 0.739 | [Download](2100/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2100/previews/nude.png) | [<NSFW, click to see>](2100/previews/nude2.png) |  |  |
| 1400 | 0.594 | [Download](1400/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [<NSFW, click to see>](1400/previews/nude2.png) |  |  |
| 700 | 0.512 | [Download](700/haruka_takayama_sakuratrick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [<NSFW, click to see>](700/previews/nude2.png) |  |  |
|
TheBloke/bagel-8x7b-v0.2-GGUF
|
TheBloke
| 2024-01-06T20:39:31Z | 50 | 8 |
transformers
|
[
"transformers",
"gguf",
"mixtral",
"dataset:ai2_arc",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"dataset:nvidia/HelpSteer",
"dataset:Intel/orca_dpo_pairs",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned",
"dataset:LDJnr/Capybara",
"dataset:JULIELab/EmoBank",
"dataset:kingbri/PIPPA-shareGPT",
"base_model:jondurbin/bagel-8x7b-v0.2",
"base_model:quantized:jondurbin/bagel-8x7b-v0.2",
"license:apache-2.0",
"region:us",
"conversational"
] | null | 2024-01-06T20:27:52Z |
---
base_model: jondurbin/bagel-8x7b-v0.2
datasets:
- ai2_arc
- jondurbin/airoboros-3.2
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
inference: false
license: apache-2.0
model_creator: Jon Durbin
model_name: Bagel 8X7B v0.2
model_type: mixtral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Bagel 8X7B v0.2 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/bagel-8x7b-v0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [bagel-8x7b-v0.2.Q2_K.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes |
| [bagel-8x7b-v0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss |
| [bagel-8x7b-v0.2.Q4_0.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [bagel-8x7b-v0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended |
| [bagel-8x7b-v0.2.Q5_0.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [bagel-8x7b-v0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended |
| [bagel-8x7b-v0.2.Q6_K.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss |
| [bagel-8x7b-v0.2.Q8_0.gguf](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF/blob/main/bagel-8x7b-v0.2.Q8_0.gguf) | Q8_0 | 8 | 49.63 GB| 52.13 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/bagel-8x7b-v0.2-GGUF and below it, a specific filename to download, such as: bagel-8x7b-v0.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/bagel-8x7b-v0.2-GGUF bagel-8x7b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/bagel-8x7b-v0.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/bagel-8x7b-v0.2-GGUF bagel-8x7b-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m bagel-8x7b-v0.2.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./bagel-8x7b-v0.2.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./bagel-8x7b-v0.2.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Bagel 8X7B v0.2
# A bagel, with everything (except DPO)

## Overview
An experimental fine-tune of [mixtral-8x7b-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [bagel](https://github.com/jondurbin/bagel).
This is the model after the SFT phase, before DPO has been applied.
Hardware kindly provided by [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon)
### Data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned)
- Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq)
  - Corpus of yes/no questions (which can be surprisingly difficult for models to answer).
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank)
  - Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text)
  - Books/plain text, again to make the model less boring; only a handful of examples, prepared with [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT)
- Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa)
  - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
## How to easily download and use this model
[Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.
1) For this model rent the [Jon Durbin 4xA6000](https://shop.massedcompute.com/products/jon-durbin-4x-a6000?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) Virtual Machine
2) After you start your rental you will receive an email with instructions on how to Login to the VM
3) Once inside the VM, open the terminal and run `conda activate text-generation-inference`
4) Then `cd Desktop/text-generation-inference/`
5) Run `volume=$PWD/data`
6) Run `model=jondurbin/bagel-8x7b-v0.2`
7) `sudo docker run --gpus '"device=0,1,2,3"' --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
8) The model will take some time to load...
9) Once loaded the model will be available on port 8080
Sample command within the VM
```
curl 0.0.0.0:8080/generate \
-X POST \
-d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}' \
-H 'Content-Type: application/json'
```
You can also access the model from outside the VM
```
curl IP_ADDRESS_PROVIDED_BY_MASSED_COMPUTE_VM:8080/generate \
-X POST \
-d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}' \
-H 'Content-Type: application/json'
```
For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA)
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is effectively 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that, because of the dataset formatting and the variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are simply included in the instruction section.
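For reference, here is a minimal sketch of a helper that assembles this Alpaca-style variant; the function name is my own, and the optional system prompt is folded into the instruction block as described above:
```python
def build_alpaca_prompt(instruction: str, system: str = "") -> str:
    # Fold the optional system prompt into the instruction block
    # (this variant has no separate "### Input:" section).
    header = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.")
    body = f"{system}\n{instruction}" if system else instruction
    return f"{header}\n\n### Instruction:\n{body}\n\n### Response:\n"
```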
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
### Default via chat template
The model's `tokenizer_config.json` includes the default chat template (llama-2), so you can simply use the `apply_chat_template` method to build the full prompt.
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/bagel-8x7b-v0.2')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Contribute
If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and either make a PR or open an issue with details.
To help me with the fine-tuning costs (which are extremely expensive for these large combined datasets):
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Guide for certain tasks
#### RA(G)/contextual question answering
The model was trained to ignore what it thinks it knows and to use the provided context to answer questions, when using the format below.
The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
The format for a contextual prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the relevant information and how to associate specific sources with each response.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
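Assembling these delimited blocks by hand is error-prone, so in practice it helps to generate them from structured data. A minimal sketch; the helper name and input structure are my own:
```python
def build_contextual_prompt(blocks, instruction):
    """Build a bagel-style contextual prompt from (metadata, text) pairs."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT\nBEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append(f"ENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)

# Reproduces the blueberry example above.
prompt = build_contextual_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, "
      "but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```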
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to OpenAI's function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse it and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation and hardening:
```python
import re

import requests

def inject_context(input_text, **context):
    # Replace :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # Search via DuckDuckGo using search_string and return the text content.
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"https?://\S+", input_text, re.I)))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # Call the model with the prompt and return its output.
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding brackets from the argument before dispatching.
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3).strip("[]"), **context)
```
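Assuming the stubs above were fully implemented, end-to-end usage would look something like this (hypothetical; `rewoo_prompt` stands in for the full planning prompt shown earlier):
```python
plan = infer(rewoo_prompt)   # ask the model for the step-by-step plan
print(parse_plan(plan))      # execute each step and print the final answer
```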
### Fine-tuning information
You can find charts, and the full configuration used to fine-tune this model on [weights and biases](https://wandb.ai/jondurbin/bagel-8x7b-v0.2/runs/agxjjdso?workspace=user-jondurbin)
The model was fine-tuned on an 8x a6000 instance, for 4 days, 15 hours, 6 minutes and 42 seconds.
### Licence and usage restrictions
The base model is mixtral-8x7b-v0.1, which is licensed as apache-2.0 - no issues there.
The fine-tuning data, however, includes several datasets that have data generated at least in part by OpenAI's gpt-4.
I am not a lawyer, so I can't help determine if this is actually commercially viable, but some questions that often come up are:
- Does the OpenAI ToS apply only to the user who created the dataset initially, and not subsequent models?
- If the dataset was released under a permissive license, but actually includes OpenAI generated data, does that ToS supersede the license?
- Does the dataset fall completely under fair use anyways, since the model isn't really capable of reproducing the entire training set verbatim?
Use your best judgement and seek legal advice if you are concerned about the terms. In any case, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
alirzb/S1_M1_R3_BEiT_42621220
|
alirzb
| 2024-01-06T20:27:10Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T18:33:37Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S1_M1_R3_BEiT_42621220
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R3_BEiT_42621220
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0131
- Accuracy: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0088 | 1.0 | 379 | 0.0077 | 0.9977 |
| 0.025 | 2.0 | 759 | 0.0080 | 0.9984 |
| 0.0362 | 3.0 | 1139 | 0.0049 | 0.9992 |
| 0.0007 | 4.0 | 1519 | 0.0146 | 0.9977 |
| 0.0 | 4.99 | 1895 | 0.0131 | 0.9977 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
ostapeno/newt_adaNeo1B_ropes_read_background_situation_sbs0.5_svdemb_sgd_full_ft_finegrained
|
ostapeno
| 2024-01-06T19:52:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-06T14:25:11Z |
Number of experts present in the library: 4
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| ropes_read_background_situation_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| ropes_read_background_situation_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| ropes_read_background_situation_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
Last updated on: 2024-01-06 19:52:01+00:00
|
alirzb/S5_M1_fold5_ViT_42621111
|
alirzb
| 2024-01-06T19:48:22Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T18:16:27Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S5_M1_fold5_ViT_42621111
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold5_ViT_42621111
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0042
- Accuracy: 0.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0311 | 1.0 | 368 | 0.0044 | 0.9992 |
| 0.0045 | 2.0 | 737 | 0.0014 | 0.9992 |
| 0.0038 | 3.0 | 1105 | 0.0068 | 0.9984 |
| 0.0001 | 4.0 | 1474 | 0.0041 | 0.9984 |
| 0.0 | 4.99 | 1840 | 0.0042 | 0.9984 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
ostapeno/newt_adaNeo1B_super_glue_cb_1_0_2_sbs0.5_svdemb_sgd_full_ft_coarsegrained
|
ostapeno
| 2024-01-06T19:46:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-06T18:42:08Z |
Number of experts present in the library: 8
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| super_glue_cb_1_0_2_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2_v6 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2_v7 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
Last updated on: 2024-01-06 19:46:13+00:00
|
dnoever/zephyr-7b-beta-exl2
|
dnoever
| 2024-01-06T19:40:49Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-beta",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T19:15:17Z |
---
license: mit
datasets:
- open-llm-leaderboard/details_HuggingFaceH4__zephyr-7b-beta
language:
- en
---
|
bartowski/Kunoichi-7B-exl2
|
bartowski
| 2024-01-06T19:33:13Z | 0 | 0 | null |
[
"merge",
"text-generation",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2024-01-06T17:36:11Z |
---
license: cc-by-nc-4.0
tags:
- merge
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Kunoichi-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
## The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
Each branch contains a single bits-per-weight quantization, with the main branch containing only the measurement.json used for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/SanjiWatsuki/Kunoichi-7B
<a href="https://huggingface.co/bartowski/Kunoichi-7B-exl2/tree/3_5">3.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/Kunoichi-7B-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Kunoichi-7B-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Kunoichi-7B-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/Kunoichi-7B-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Kunoichi-7B-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just want the measurement.json) to a folder called `Kunoichi-7B-exl2`:
```shell
mkdir Kunoichi-7B-exl2
huggingface-cli download bartowski/Kunoichi-7B-exl2 --local-dir Kunoichi-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Kunoichi-7B-exl2
huggingface-cli download bartowski/Kunoichi-7B-exl2 --revision 4_0 --local-dir Kunoichi-7B-exl2 --local-dir-use-symlinks False
```
|
alirzb/S2_M1_R1_BEiT_42621224
|
alirzb
| 2024-01-06T19:28:05Z | 147 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T18:34:36Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S2_M1_R1_BEiT_42621224
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S2_M1_R1_BEiT_42621224
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
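The card leaves usage unspecified; as a minimal sketch (not from the original card; the image path is a placeholder), the checkpoint can be queried through the transformers image-classification pipeline:
```python
from transformers import pipeline

# Minimal usage sketch: replace the path with a real image file.
classifier = pipeline("image-classification", model="alirzb/S2_M1_R1_BEiT_42621224")
print(classifier("example.jpg"))
```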
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0388 | 1.0 | 231 | 0.0141 | 0.9949 |
| 0.0418 | 2.0 | 463 | 0.0076 | 0.9987 |
| 0.0004 | 3.0 | 694 | 0.0002 | 1.0 |
| 0.0044 | 4.0 | 926 | 0.0003 | 1.0 |
| 0.0001 | 4.99 | 1155 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Beybars/videomae-base-finetuned-ucf101-subset
|
Beybars
| 2024-01-06T19:13:54Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-01-06T18:07:27Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2061
- Accuracy: 0.9484
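As a minimal usage sketch (not from the original card; the clip path is a placeholder and a video backend such as decord or av must be installed for the pipeline to decode frames):
```python
from transformers import pipeline

# Minimal usage sketch: score a local clip with the fine-tuned checkpoint.
classifier = pipeline("video-classification", model="Beybars/videomae-base-finetuned-ucf101-subset")
print(classifier("example_clip.mp4"))
```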
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9543 | 0.12 | 75 | 1.5145 | 0.6 |
| 0.7175 | 1.12 | 150 | 0.9735 | 0.6571 |
| 0.2914 | 2.12 | 225 | 0.4764 | 0.8 |
| 0.54 | 3.12 | 300 | 0.6036 | 0.8 |
| 0.1074 | 4.12 | 375 | 0.1995 | 0.9286 |
| 0.0087 | 5.12 | 450 | 0.1256 | 0.9714 |
| 0.0061 | 6.12 | 525 | 0.0466 | 0.9857 |
| 0.0082 | 7.12 | 600 | 0.0363 | 0.9857 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jackoyoungblood/GPT2_Original2
|
jackoyoungblood
| 2024-01-06T19:11:54Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T19:11:23Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: GPT2_Original2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2_Original2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
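As a minimal usage sketch (not from the original card; the prompt and generation length are illustrative):
```python
from transformers import pipeline

# Minimal usage sketch: sample a continuation from the fine-tuned checkpoint.
generator = pipeline("text-generation", model="jackoyoungblood/GPT2_Original2")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```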
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 150
- eval_batch_size: 150
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1200
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ladoza03/xlm-roberta-base-finetuned-panx-nl
|
ladoza03
| 2024-01-06T18:48:14Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-06T18:36:15Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-nl
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1406
- F1: 0.9110
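As a minimal usage sketch (not from the original card; the repo name suggests PAN-X Dutch NER, so the example sentence is an invented Dutch one):
```python
from transformers import pipeline

# Minimal usage sketch: aggregation_strategy="simple" merges word pieces
# into whole entities.
ner = pipeline(
    "token-classification",
    model="ladoza03/xlm-roberta-base-finetuned-panx-nl",
    aggregation_strategy="simple",
)
print(ner("Jeroen woont in Amsterdam en werkt bij Philips."))
```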
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2478 | 1.0 | 1250 | 0.1519 | 0.8691 |
| 0.1247 | 2.0 | 2500 | 0.1346 | 0.8930 |
| 0.0817 | 3.0 | 3750 | 0.1291 | 0.9064 |
| 0.049 | 4.0 | 5000 | 0.1406 | 0.9110 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
CyberHarem/flan_majonotabitabi
|
CyberHarem
| 2024-01-06T18:47:11Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/flan_majonotabitabi",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-06T18:39:52Z |
---
license: mit
datasets:
- CyberHarem/flan_majonotabitabi
pipeline_tag: text-to-image
tags:
- art
---
# Lora of flan_majonotabitabi
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4420, you need to download `4420/flan_majonotabitabi.pt` as the embedding and `4420/flan_majonotabitabi.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with a score of 0.923. The trigger words are:
1. `flan_majonotabitabi`
2. `black_hair, long_hair, hair_over_one_eye, mole, mole_under_eye, ribbon, hat, witch_hat, smile, closed_eyes, blue_eyes`
For the following groups, using this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.877 | [Download](5100/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.911 | [Download](4760/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.923** | [**Download**](4420/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.744 | [Download](4080/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.902 | [Download](3740/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.901 | [Download](3400/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.899 | [Download](3060/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.712 | [Download](2720/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.907 | [Download](2380/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.842 | [Download](2040/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.726 | [Download](1700/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.672 | [Download](1360/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.574 | [Download](1020/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.454 | [Download](680/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.860 | [Download](340/flan_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Panchovix/goliath-120b-exl2-rpcal
|
Panchovix
| 2024-01-06T18:41:33Z | 7 | 32 | null |
[
"license:llama2",
"region:us"
] | null | 2023-11-06T17:37:40Z |
---
license: llama2
---
EXL2 quants of alpindale/goliath-120b (https://huggingface.co/alpindale/goliath-120b), to be used on exllamav2.
Update 06/01/2024: updated with the new quant method; thanks for the measurement [here](https://github.com/turboderp/exllamav2/files/13846439/goliath-120b-rpcal-measurement.json).
The calibration dataset is a cleaned, fixed PIPPA RP dataset, which skews the results in favor of RP usage. You can find the calibration dataset [here](https://huggingface.co/datasets/royallab/PIPPA-cleaned).
I've added a measurement.json file on the main branch if you want to do your own quants.
[4.85bpw](https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal/tree/4.85bpw)
[4.5bpw](https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal/tree/4.5bpw)
[3bpw](https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal/tree/3bpw)
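As a sketch (assuming the branch names above), a specific quant can be fetched programmatically with `huggingface_hub`:
```python
from huggingface_hub import snapshot_download

# Download sketch: fetch the 4.5bpw branch into a local folder.
snapshot_download(
    repo_id="Panchovix/goliath-120b-exl2-rpcal",
    revision="4.5bpw",
    local_dir="goliath-120b-exl2-4.5bpw",
)
```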
# Original model card
# Goliath 120B
An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one.
Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix):
- [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp)
- [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite)
- [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM)
- [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI)
# Prompting Format
Both Vicuna and Alpaca will work, but because the initial and final layers belong primarily to Xwin, I expect Vicuna to work best.
# Merge process
The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).
The layer ranges used are as follows:
```yaml
- range 0, 16
Xwin
- range 8, 24
Euryale
- range 17, 32
Xwin
- range 25, 40
Euryale
- range 33, 48
Xwin
- range 41, 56
Euryale
- range 49, 64
Xwin
- range 57, 72
Euryale
- range 65, 80
Xwin
```
# Screenshots

# Benchmarks
Coming soon.
# Acknowledgements
Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
|
Rixalizal12/Oboy
|
Rixalizal12
| 2024-01-06T18:41:17Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-06T17:24:41Z |
---
license: creativeml-openrail-m
---
|
mus-shd/ppo-SnowballTarget
|
mus-shd
| 2024-01-06T18:40:32Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-01-06T18:38:25Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mus-shd/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ladoza03/xlm-roberta-base-finetuned-panx-fr
|
ladoza03
| 2024-01-06T18:35:53Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-06T18:24:20Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2291
- F1: 0.8936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3452 | 1.0 | 1250 | 0.2279 | 0.8510 |
| 0.1973 | 2.0 | 2500 | 0.2132 | 0.8666 |
| 0.1288 | 3.0 | 3750 | 0.2083 | 0.8860 |
| 0.0861 | 4.0 | 5000 | 0.2291 | 0.8936 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
ladoza03/xlm-roberta-base-finetuned-panx-vi
|
ladoza03
| 2024-01-06T18:24:02Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-06T18:13:45Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-vi
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1951
- F1: 0.9134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.383 | 1.0 | 1250 | 0.2193 | 0.8558 |
| 0.2044 | 2.0 | 2500 | 0.2182 | 0.8780 |
| 0.1361 | 3.0 | 3750 | 0.1889 | 0.8980 |
| 0.0888 | 4.0 | 5000 | 0.1951 | 0.9134 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Tech-Meld/HS-Instructed
|
Tech-Meld
| 2024-01-06T18:23:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-06T18:23:25Z |
---
license: creativeml-openrail-m
---
|
acrastt/lr-test
|
acrastt
| 2024-01-06T18:23:10Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-12-28T04:31:47Z |
---
license: mit
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
---
|
kdave/FineTuned_Finbert
|
kdave
| 2024-01-06T18:15:57Z | 0 | 7 | null |
[
"financial-sentiment-analysis",
"Transformers",
" sentiment-analysis",
"TensorFlow",
"PyTorch",
"Text Classification",
"bert",
"Inference Endpoints",
"en",
"region:us"
] | null | 2024-01-06T07:24:29Z |
---
language:
- en
tags:
- financial-sentiment-analysis
- Transformers
- ' sentiment-analysis'
- TensorFlow
- PyTorch
- Text Classification
- bert
- Inference Endpoints
---
# Model Card for FineTuned finbert model
<!-- Provide a quick summary of what the model is/does. -->
Our fine-tuned FinBERT model is a powerful tool designed for sentiment analysis specifically tailored to Indian stock market news. Leveraging the foundation of FinBERT, a BERT model pre-trained on extensive financial communication text (https://huggingface.co/yiyanghkust/finbert-tone), our model focuses on enhancing sentiment analysis within the context of the Indian financial landscape.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Khushi Dave
- **Model type:** BERT (Bidirectional Encoder Representations from Transformers)
- **Language:** English
- **Finetuned from model:** yiyanghkust/finbert-tone
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/kdave/FineTuned_Finbert
## Uses
The Fine-Tuned FinBERT model is designed for sentiment analysis in Indian stock market news. It's beneficial for researchers, financial analysts, and developers aiming to enhance sentiment assessments. Users include those making investment decisions and the academic community. Responsible usage and acknowledgment of the original FinBERT model are encouraged.
In essence, it's a valuable tool for understanding market sentiment in the Indian context, catering to professionals and individuals engaged in financial analysis and research.
### Direct Use
```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
# Load the fine-tuned FinBERT model and tokenizer
finbert = BertForSequenceClassification.from_pretrained('kdave/FineTuned_Finbert', num_labels=3)
tokenizer = BertTokenizer.from_pretrained('kdave/FineTuned_Finbert')
# Create a sentiment-analysis pipeline
nlp_pipeline = pipeline("sentiment-analysis", model = finbert, tokenizer = tokenizer)
# Example sentences related to Indian stock market news
sentences = [
"The Indian stock market experienced a surge in trading activity.",
"Investors are optimistic about the future of Indian financial markets.",
"Concerns about economic uncertainties are affecting stock prices in India.",
"Earnings reports from Indian companies show a positive trend."
]
# Perform sentiment analysis using the fine-tuned FinBERT model for Indian stock market news
results = nlp_pipeline(sentences)
print(results)
```
### Out-of-Scope Use
1. **Misuse:**
   - Deliberate Misinformation: The model may be misused if fed with intentionally crafted misinformation to manipulate sentiment analysis results. Users should ensure the input data is authentic and unbiased.
2. **Malicious Use:**
   - Market Manipulation Attempts: Any attempt to use the model to propagate false sentiment for the purpose of market manipulation is strictly unethical and against the intended use of the model.
3. **Limitations:**
   - Non-Financial Texts: The model is fine-tuned specifically for Indian stock market news. It may not perform optimally when applied to non-financial texts or unrelated domains.
   - Extreme Outliers: Unusual or extreme cases in sentiment expression might pose challenges. The model's performance might be less reliable for exceptionally rare or unconventional sentiment expressions.
   - Non-Standard Language: The model's training data primarily comprises standard financial language. It may not perform as well when faced with non-standard language, colloquialisms, or slang.
## Bias, Risks, and Limitations
#### Technical Limitations:
1. **Domain Specificity:**
- The model is finely tuned for Indian stock market news, limiting its effectiveness when applied to texts outside this domain.
2. **Data Representativeness:**
- The model's performance is contingent on the representativeness of the training data. It may not capture nuances in sentiment expressions not well-represented in the training corpus.
3. **Language Complexity:**
- Non-standard language, colloquialisms, or slang may pose challenges, as the model is primarily trained on standard financial language.
#### Sociotechnical Considerations:
1. **Bias in Training Data:**
- The model inherits biases present in the training data. Efforts have been made to curate diverse data, but biases, if present, may affect the model's outputs.
2. **Ethical Usage:**
- Users are urged to employ the model ethically, avoiding misuse or malicious applications that may impact market sentiment or manipulate results.
#### Risks:
1. **Decisions Based Solely on Model Output:**
- Relying solely on the model for decision-making is discouraged. Users should supplement model insights with additional research and expert judgment.
2. **Market Dynamics:**
- The model might not account for sudden and unprecedented market events, and decisions should consider real-time market dynamics.
### Responsible Model Usage:
Understanding these limitations, users are advised to interpret model outputs judiciously, considering the context and potential biases. Transparent communication and awareness of both technical and sociotechnical constraints are essential for responsible model usage. While the model is a valuable tool, it is not infallible, and decision-makers should exercise prudence and diligence.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
Step 1: Install Required Libraries
Ensure you have the necessary libraries installed by running:
```bash
pip install transformers
```
Step 2: Load the Fine-Tuned Model
Use the following Python code to load the model and tokenizer:
```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
# Load the fine-tuned FinBERT model and tokenizer
finbert = BertForSequenceClassification.from_pretrained('kdave/FineTuned_Finbert', num_labels=3)
tokenizer = BertTokenizer.from_pretrained('kdave/FineTuned_Finbert')
# Create a sentiment-analysis pipeline
nlp_pipeline = pipeline("sentiment-analysis", model=finbert, tokenizer=tokenizer)
```
Step 3: Perform Sentiment Analysis
Now, you're ready to analyze sentiment! Provide the model with sentences related to Indian stock market news:
```python
# Example sentences related to Indian stock market news
sentences = [
"The Indian stock market experienced a surge in trading activity.",
"Investors are optimistic about the future of Indian financial markets.",
"Concerns about economic uncertainties are affecting stock prices in India.",
"Earnings reports from Indian companies show a positive trend."
]
# Perform sentiment analysis using the fine-tuned FinBERT model
results = nlp_pipeline(sentences)
print(results)
```
Run the code, and voilà! You'll receive sentiment insights for each sentence.
Step 4: Incorporate into Your Workflow
Integrate this model seamlessly into your financial NLP research or analysis workflows to elevate the accuracy and depth of sentiment assessments related to the Indian stock market.
Now, you're all set to harness the power of the Fine-Tuned FinBERT model. Happy analyzing! 📈🚀
## Training Details
**Dataset Information:**
The Fine-Tuned FinBERT model was trained on a carefully curated dataset consisting of Indian financial news articles with summaries. Here's a brief overview of the dataset and its preparation:
1. **Data Source:**
- The dataset encompasses a wide array of Indian financial news articles, ensuring a diverse and representative sample of content related to the stock market.
2. **Text Summarization:**
- The T5-base model from Hugging Face was employed for text summarization. This step aimed to distill the essential information from each article, providing concise summaries for training the model.
3. **Sentiment Labeling:**
- Sentiment labels for the curated dataset were derived through the GPT add-on for Google Sheets. This process involved annotating the articles with positive, negative, or neutral sentiments, enhancing the model's ability to discern nuanced expressions.
4. **Contextual Richness:**
- The dataset was designed to be contextually rich, exposing the model to a spectrum of sentiment expressions within the Indian stock market landscape. This diversity ensures the model's adaptability to varied scenarios.
**Dataset Card:**
For more detailed information on the dataset, including statistics, features, and documentation related to data pre-processing, please refer to the associated [Dataset Card](https://huggingface.co/datasets/kdave/Indian_Financial_News).
This meticulous curation and diverse data incorporation contribute to the model's proficiency in capturing nuanced sentiment expressions relevant to the Indian stock market.
|
rohansolo/BB-L-01-7B-mlx
|
rohansolo
| 2024-01-06T18:13:38Z | 1 | 0 |
mlx
|
[
"mlx",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"hi",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:rohansolo/BB_HindiHinglish",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-06T18:03:24Z |
---
language:
- hi
license: cc-by-nc-4.0
tags:
- alignment-handbook
- generated_from_trainer
- mlx
datasets:
- HuggingFaceH4/ultrachat_200k
- rohansolo/BB_HindiHinglish
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: BB-L-01-7B
results: []
---
# BB-L-01-7B-mlx
This model was converted to MLX format from [`rohansolo/BB-L-01-7B`](https://huggingface.co/rohansolo/BB-L-01-7B).
Refer to the [original model card](https://huggingface.co/rohansolo/BB-L-01-7B) for more details on the model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model rohansolo/BB-L-01-7B-mlx --prompt "<|system|>
You are a helpful AI assistant</s>
<|user|>
एक पाइथन स्क्रिप्ट लिखो बबल सॉर्ट के लिए</s>"
```
|
ladoza03/xlm-roberta-base-finetuned-panx-en
|
ladoza03
| 2024-01-06T18:13:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-06T14:59:16Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2771
- F1: 0.8326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3975 | 1.0 | 1250 | 0.2850 | 0.7912 |
| 0.2466 | 2.0 | 2500 | 0.2563 | 0.8094 |
| 0.178 | 3.0 | 3750 | 0.2654 | 0.8260 |
| 0.1273 | 4.0 | 5000 | 0.2771 | 0.8326 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
shnl/llama2-13B-QAG-10000
|
shnl
| 2024-01-06T18:12:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2024-01-06T18:03:30Z |
---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
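For reference, a minimal sketch of the same settings expressed as a transformers `BitsAndBytesConfig` (the base-model id comes from this card's metadata):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", quantization_config=bnb_config
)
```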
### Framework versions
- PEFT 0.6.2
|
CyberHarem/saya_majonotabitabi
|
CyberHarem
| 2024-01-06T18:09:04Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/saya_majonotabitabi",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-06T18:01:37Z |
---
license: mit
datasets:
- CyberHarem/saya_majonotabitabi
pipeline_tag: text-to-image
tags:
- art
---
# Lora of saya_majonotabitabi
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3400, you need to download `3400/saya_majonotabitabi.pt` as the embedding and `3400/saya_majonotabitabi.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3400**, with a score of 0.904. The trigger words are:
1. `saya_majonotabitabi`
2. `black_hair, short_hair, blue_eyes, bangs, blush`
For the following groups, using this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.833 | [Download](5100/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.826 | [Download](4760/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.734 | [Download](4420/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.752 | [Download](4080/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.761 | [Download](3740/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| **3400** | **0.904** | [**Download**](3400/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.862 | [Download](3060/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.845 | [Download](2720/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.853 | [Download](2380/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.753 | [Download](2040/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.741 | [Download](1700/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.741 | [Download](1360/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.763 | [Download](1020/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.809 | [Download](680/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.696 | [Download](340/saya_majonotabitabi.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ludis/tsukasa-13b-qlora-limarp-gptq
|
ludis
| 2024-01-06T18:00:41Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:PygmalionAI/PIPPA",
"dataset:ludis/geepeetee4",
"dataset:lemonilia/LimaRP",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-10T10:35:11Z |
---
datasets:
- PygmalionAI/PIPPA
- ludis/geepeetee4
- lemonilia/LimaRP
---
## GPTQ
GPTQ quants for ludis/tsukasa-13b-qlora-limarp. Download the original model except for the .bin files (or download everything and delete the .bin files), then move the contents of whichever quants folder you want to use into the original model folder and run with AutoGPTQ.
## Prompting
https://rentry.org/v43eo - recommended prompts and gen settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
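For illustration (the turns are invented; the role tokens come from this card, and the exact whitespace between turns may vary), a conversation history assembled in this format looks like:
```python
# Illustrative prompt sketch using the three role tokens described above.
prompt = (
    "<|system|>Enter roleplay mode. You are playing a tavern keeper."
    "<|user|>Hello! Who are you?"
    "<|model|>"
)
# The model completes the text after <|model|>; append further
# <|user|>/<|model|> turns to extend the conversation history.
```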
## Training
base model (llama-2-13b-hf)
then tuned on pippa for 1 epoch
then tuned on ludis/geepeetee4 commit 58aabc8 for 1 epoch
then tuned on limarp (without ponyville, lolicit, all the fallen, and eka's portal subsets) Version 2023-09-08 for 2 epochs
|
ludis/tsukasa-13b-qlora-limarp-gguf
|
ludis
| 2024-01-06T18:00:30Z | 174 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"dataset:PygmalionAI/PIPPA",
"dataset:ludis/geepeetee4",
"dataset:lemonilia/LimaRP",
"endpoints_compatible",
"region:us"
] | null | 2023-09-11T06:06:27Z |
---
datasets:
- PygmalionAI/PIPPA
- ludis/geepeetee4
- lemonilia/LimaRP
---
## GGUF
gguf quants for ludis/tsukasa-13b-qlora-limarp
## Prompting
https://rentry.org/tsukasa13b - recommended prompts and gen settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
## Training
base model (mistral-0.1-7b)
[axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) was used for training on a 4x NVIDIA A40 GPU cluster.
The A40 GPU cluster has been graciously provided by [Arc Compute](https://www.arccompute.io/).
rank 8 lora tune of mistralai/Mistral-7B-v0.1, first tuned on koishi commit 6e675d1 for one epoch then on limarp (without ponyville, lolicit, all the fallen, and eka's portal subsets) Version 2023-09-30 for 2 epochs in metharme format
|
ludis/tsukasa-13b-qlora-limarp
|
ludis
| 2024-01-06T18:00:20Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:PygmalionAI/PIPPA",
"dataset:ludis/geepeetee4",
"dataset:lemonilia/LimaRP",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-08T04:38:37Z |
---
datasets:
- PygmalionAI/PIPPA
- ludis/geepeetee4
- lemonilia/LimaRP
---
## Prompting
https://rentry.org/tsukasa13b - recommended prompts and gen settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
## Training
base model (llama-2-13b-hf)
tuned on koishi dataset (commit c83d922) for 1 epoch
then tuned on pippa dataset (commit 6412b0c) for 1 epoch
then tuned on geepeetee4 dataset (commit c83d922) for 1 epoch
then tuned on limarp (without ponyville, lolicit, and all the fallen subsets. Version 2023-09-14) for 2 epochs
|
ludis/tsukasa-limarp-7b-gguf
|
ludis
| 2024-01-06T17:57:04Z | 42 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"dataset:PygmalionAI/PIPPA",
"dataset:lemonilia/LimaRP",
"endpoints_compatible",
"region:us"
] | null | 2023-09-04T03:09:31Z |
---
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---
## GGUF
gguf quants for ludis/tsukasa-limarp-7b
## Prompting
https://rentry.org/v43eo - recommended prompts and gen settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
## Training
base model (llama-2-7b-hf)
tuned on commit de693ac of the koishi dataset for 1 epoch as part of ludis/tsukasa-7b
then tuned on commit 36fc235 of pippa metharme for 1 epoch as part of ludis/tsukasa-7b
then tuned on Version 2023-09-03 of LimaRP (without ponyville, lolicit, all the fallen, and eka's portal subsets) for 2 epochs
|
ludis/tsukasa-limarp-7b
|
ludis
| 2024-01-06T17:56:49Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:PygmalionAI/PIPPA",
"dataset:lemonilia/LimaRP",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-03T17:43:29Z |
---
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---
## Prompting
https://rentry.org/v43eo - recommended prompts and gen settings
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
## Training
base model (llama-2-7b-hf)
tuned on commit de693ac of the koishi dataset for 1 epoch as part of ludis/tsukasa-7b
then tuned on commit 36fc235 of pippa metharme for 1 epoch as part of ludis/tsukasa-7b
then tuned on Version 2023-09-03 of LimaRP (without ponyville, lolicit, all the fallen, and eka's portal subsets) for 2 epochs
|
Serpol1999/serpol1999-test
|
Serpol1999
| 2024-01-06T17:56:28Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-06T17:49:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Serpol1999/test Dreambooth model trained by Serpol1999 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
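Outside the Colab notebooks, a minimal diffusers sketch (the prompt is illustrative; the actual concept token depends on how the model was trained):
```python
import torch
from diffusers import StableDiffusionPipeline

# Usage sketch for the DreamBooth checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "Serpol1999/serpol1999-test", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of the trained concept").images[0]
image.save("sample.png")
```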
Sample pictures of this concept:
|
parsak/mistralcode-7b-instruct-lora-adapters
|
parsak
| 2024-01-06T17:53:25Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-06T17:12:37Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
aslessor/bert-large-uncased-whole-word-masking-finetuned-squad
|
aslessor
| 2024-01-06T17:51:56Z | 10 | 0 |
generic
|
[
"generic",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"endpoints-template",
"other",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
other
| 2024-01-01T20:07:26Z |
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
tags:
- endpoints-template
library_name: generic
model-index:
- name: bert-large-uncased-whole-word-masking-finetuned-squad
results: []
pipeline_tag: other
---
# DEPLOYED @: https://ciy95hpzki22rqvf.us-east-1.aws.endpoints.huggingface.cloud
# BERT large model (uncased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Unlike other BERT models, this model was trained with a newer technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
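As a minimal sketch (assuming standard WordPiece `##` continuation markers; the actual pretraining code lives in the original TensorFlow repository), whole-word masking groups pieces into words and masks each word as a unit:
```python
import random

# Illustrative only: WordPiece continuation pieces start with "##",
# so pieces are grouped into words and each word is masked or kept as a unit.
pieces = ["the", "head", "##quarters", "moved", "to", "birmingham"]
words = []
for p in pieces:
    if p.startswith("##") and words:
        words[-1].append(p)   # continuation piece joins the current word
    else:
        words.append([p])     # a new word starts
masked = [["[MASK]"] * len(w) if random.random() < 0.15 else w for w in words]
print([p for w in masked for p in w])
```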
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.
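For example, a minimal question-answering call with the `transformers` pipeline might look like this (a sketch, assuming a recent `transformers` version):
```python
from transformers import pipeline

# Illustrative only: load the model into a question-answering pipeline
qa = pipeline("question-answering", model="bert-large-uncased-whole-word-masking-finetuned-squad")
result = qa(question="What is my name?", context="My name is Clara and I live in Berkeley.")
print(result["answer"])  # expected: "Clara"
```
## Training data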
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
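As a sketch, the replacement rule for a single token already selected for masking can be written as:
```python
import random

def replace_selected_token(token, vocab):
    """Sketch of BERT's 80/10/10 rule for a token already chosen for masking."""
    r = random.random()
    if r < 0.8:
        return "[MASK]"              # 80% of cases: replace with [MASK]
    if r < 0.9:
        return random.choice(vocab)  # 10% of cases: a random vocabulary token
    return token                     # remaining 10%: keep the original token
```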
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./examples/models/wwm_uncased_finetuned_squad/ \
--per_device_eval_batch_size=3 \
  --per_device_train_batch_size=3
```
## Evaluation results
The results obtained are the following:
```
f1 = 93.15
exact_match = 86.91
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
# Error Log
```json
{'error': 'Body needs to provide a inputs key, recieved: b\'{"question":"What is my name?","context":"My name is Clara and I live in Berkeley."}\''}
```
```json
{'error': 'Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)'}
```
|
junzhangli/my_awesome_billsum_model
|
junzhangli
| 2024-01-06T17:49:33Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"base_model:allenai/led-base-16384",
"base_model:finetune:allenai/led-base-16384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-06T17:21:24Z |
---
license: apache-2.0
base_model: allenai/led-base-16384
tags:
- generated_from_trainer
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7927
- eval_rouge1: 0.1895
- eval_rouge2: 0.0969
- eval_rougeL: 0.1641
- eval_rougeLsum: 0.1692
- eval_gen_len: 20.0
- eval_runtime: 47.5253
- eval_samples_per_second: 5.218
- eval_steps_per_second: 1.052
- epoch: 2.0
- step: 396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Sunirmala/Medic-Phi2
|
Sunirmala
| 2024-01-06T17:45:37Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"phi-msft",
"custom_code",
"license:mit",
"region:us"
] | null | 2024-01-06T15:24:23Z |
---
license: mit
library_name: adapter-transformers
---
|
bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2
|
bartowski
| 2024-01-06T17:35:14Z | 0 | 0 | null |
[
"text-generation",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
text-generation
| 2024-01-06T15:14:20Z |
---
license: cc-by-nc-nd-4.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Mistral-7B-Instruct-v0.2-code-ft
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
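For reference, a from-scratch conversion looks roughly like the following; the flags are assumptions based on exllamav2's `convert.py` interface, not commands taken from this repo:
```shell
# -b: target bits per weight, -hb: lm_head bits (assumed interface)
python convert.py -i /path/to/Mistral-7B-Instruct-v0.2-code-ft \
  -o /path/to/working_dir -cf /path/to/output_dir -b 4.0 -hb 6
```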
Original model: https://huggingface.co/Nondzu/Mistral-7B-Instruct-v0.2-code-ft
<a href="https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2/tree/3_5">3.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `Mistral-7B-Instruct-v0.2-code-ft-exl2`:
```shell
mkdir Mistral-7B-Instruct-v0.2-code-ft-exl2
huggingface-cli download bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2 --local-dir Mistral-7B-Instruct-v0.2-code-ft-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Mistral-7B-Instruct-v0.2-code-ft-exl2
huggingface-cli download bartowski/Mistral-7B-Instruct-v0.2-code-ft-exl2 --revision 4_0 --local-dir Mistral-7B-Instruct-v0.2-code-ft-exl2 --local-dir-use-symlinks False
```
|
louistichelman/BART-finetuned-on-training-without-knowledge
|
louistichelman
| 2024-01-06T17:35:11Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-06T17:13:09Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: BART-finetuned-on-training-without-knowledge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-finetuned-on-training-without-knowledge
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2844
- Bleu: 2.829
- Gen Len: 19.3589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.6072 | 1.0 | 1679 | 2.3674 | 2.3478 | 19.8107 |
| 2.2177 | 2.0 | 3358 | 2.2844 | 2.829 | 19.3589 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.14.1
|
alirzb/S5_M1_fold3_ViT_42618589
|
alirzb
| 2024-01-06T17:34:46Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T16:20:06Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S5_M1_fold3_ViT_42618589
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold3_ViT_42618589
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Accuracy: 0.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0026 | 1.0 | 368 | 0.0069 | 0.9976 |
| 0.0052 | 2.0 | 737 | 0.0094 | 0.9984 |
| 0.0006 | 3.0 | 1105 | 0.0086 | 0.9984 |
| 0.0 | 4.0 | 1474 | 0.0068 | 0.9984 |
| 0.0 | 4.99 | 1840 | 0.0068 | 0.9984 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
CyberHarem/elaina_majonotabitabi
|
CyberHarem
| 2024-01-06T17:30:26Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/elaina_majonotabitabi",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-06T17:13:45Z |
---
license: mit
datasets:
- CyberHarem/elaina_majonotabitabi
pipeline_tag: text-to-image
tags:
- art
---
# Lora of elaina_majonotabitabi
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7140, you need to download `7140/elaina_majonotabitabi.pt` as the embedding and `7140/elaina_majonotabitabi.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
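As an illustration (assuming the standard AUTOMATIC1111 folder layout), the files for step 7140 would be placed and referenced like this:
```
models/Lora/elaina_majonotabitabi.safetensors   <- the LoRA file
embeddings/elaina_majonotabitabi.pt             <- the embedding file
prompt: elaina_majonotabitabi, <lora:elaina_majonotabitabi:1>, 1girl, witch_hat
```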
**The best step we recommend is 7140**, with the score of 0.953. The trigger words are:
1. `elaina_majonotabitabi`
2. `long_hair, bangs, hair_between_eyes, blue_eyes, closed_mouth, grey_hair, bow, white_hair, hat, witch_hat, black_headwear, purple_eyes`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | pattern_22 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------|
| 15300 | 0.943 | [Download](15300/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](15300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](15300/previews/nude.png) | [<NSFW, click to see>](15300/previews/nude2.png) |  |  |
| 14280 | 0.949 | [Download](14280/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](14280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](14280/previews/nude.png) | [<NSFW, click to see>](14280/previews/nude2.png) |  |  |
| 13260 | 0.953 | [Download](13260/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](13260/previews/bondage.png) |  |  |  | [<NSFW, click to see>](13260/previews/nude.png) | [<NSFW, click to see>](13260/previews/nude2.png) |  |  |
| 12240 | 0.952 | [Download](12240/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](12240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](12240/previews/nude.png) | [<NSFW, click to see>](12240/previews/nude2.png) |  |  |
| 11220 | 0.945 | [Download](11220/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](11220/previews/bondage.png) |  |  |  | [<NSFW, click to see>](11220/previews/nude.png) | [<NSFW, click to see>](11220/previews/nude2.png) |  |  |
| 10200 | 0.944 | [Download](10200/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10200/previews/nude.png) | [<NSFW, click to see>](10200/previews/nude2.png) |  |  |
| 9180 | 0.949 | [Download](9180/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9180/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9180/previews/nude.png) | [<NSFW, click to see>](9180/previews/nude2.png) |  |  |
| 8160 | 0.949 | [Download](8160/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8160/previews/nude.png) | [<NSFW, click to see>](8160/previews/nude2.png) |  |  |
| **7140** | **0.953** | [**Download**](7140/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7140/previews/nude.png) | [<NSFW, click to see>](7140/previews/nude2.png) |  |  |
| 6120 | 0.952 | [Download](6120/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6120/previews/nude.png) | [<NSFW, click to see>](6120/previews/nude2.png) |  |  |
| 5100 | 0.930 | [Download](5100/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4080 | 0.951 | [Download](4080/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3060 | 0.952 | [Download](3060/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2040 | 0.951 | [Download](2040/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1020 | 0.941 | [Download](1020/elaina_majonotabitabi.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
|
kikim6114/distilbert-base-uncased-finetuned-emotion
|
kikim6114
| 2024-01-06T17:07:12Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T15:17:27Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9248990116000972
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2129
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8234 | 1.0 | 250 | 0.2981 | 0.9135 | 0.9129 |
| 0.2432 | 2.0 | 500 | 0.2129 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.0
- Datasets 2.16.1
- Tokenizers 0.13.0.dev0
|
TinyPixel/qwen-1.8B-OrcaMini
|
TinyPixel
| 2024-01-06T17:01:31Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"dataset:TinyPixel/orca-bad",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-06T15:59:48Z |
---
datasets:
- TinyPixel/orca-bad
---
## Usage
```python
!pip install -q -U trl transformers accelerate git+https://github.com/huggingface/peft.git
!pip install -q datasets bitsandbytes einops wandb sentencepiece transformers_stream_generator tiktoken
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/qwen-1.8B-OrcaMini", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("TinyPixel/qwen-1.8B-OrcaMini", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
device = "cuda:0"
text = '''SYSTEM:
USER: what is the difference between a dog and a cat on a biological level?
ASSISTANT:'''
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs,
max_new_tokens=512,
do_sample=True,
top_p=0.95,
temperature=0.7,
top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
|
detakarang/Phixphi-4x2.7b
|
detakarang
| 2024-01-06T16:56:57Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mrm8488/phi-2-coder",
"microsoft/phi-2",
"Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T16:55:21Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mrm8488/phi-2-coder
- microsoft/phi-2
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
---
# Phixphi-4x2.7b
This model is a merge of the following models made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)
* [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
* [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)
## 🧩 Configuration
```yaml
models:
- model: mrm8488/phi-2-coder
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: microsoft/phi-2
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: dare_ties
base_model: cognitivecomputations/dolphin-2_6-phi-2
parameters:
normalize: true
int8_mask: true
dtype: float16
```
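To reproduce the merge locally, the configuration above can be saved as `config.yaml` and run with the standard mergekit CLI (an assumption about the tooling LazyMergekit wraps):
```shell
pip install mergekit
mergekit-yaml config.yaml ./Phixphi-4x2.7b --copy-tokenizer
```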
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "detakarang/Phixphi-4x2.7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mtc/mistralai-Mistral-7B-v0.1-pubmed-summarization-5000-v2-qlora-4bit
|
mtc
| 2024-01-06T16:47:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-06T16:46:38Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
roktimsardar123/client001
|
roktimsardar123
| 2024-01-06T16:45:19Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-06T15:29:37Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a apxu woman
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
davidadamczyk/biden_3_LORA_SEQ_2_SEQ_LM
|
davidadamczyk
| 2024-01-06T16:37:16Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-xxl",
"base_model:adapter:google/flan-t5-xxl",
"region:us"
] | null | 2024-01-06T16:37:13Z |
---
library_name: peft
base_model: google/flan-t5-xxl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
MaVier19/zero-shot_text_classification_pre_trained
|
MaVier19
| 2024-01-06T16:30:25Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"base_model:finetune:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-06T16:22:05Z |
---
license: mit
base_model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: zero-shot_text_classification_pre_trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zero-shot_text_classification_pre_trained
This model is a fine-tuned version of [MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8939
- Accuracy: 0.695
- F1: 0.6917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.7346 | 1.0 | 750 | 0.8939 | 0.695 | 0.6917 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Johnlhugface/LunarLander-v2
|
Johnlhugface
| 2024-01-06T16:25:32Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-06T16:25:26Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -258.68 +/- 162.92
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'exp_name': 'test1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 1,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'f': None,
 'repo_id': 'Johnlhugface/LunarLander-v2',
 'batch_size': 128,
 'minibatch_size': 32}
```
|
marcogfedozzi/sac-PandaReachDense-v3
|
marcogfedozzi
| 2024-01-06T16:20:48Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-06T16:17:36Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: sac
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -11.68 +/- 8.13
name: mean_reward
verified: false
---
# **sac** Agent playing **PandaReachDense-v3**
This is a trained model of a **sac** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual `huggingface_sb3` naming convention, not verified against this repo):
```python
from stable_baselines3 import SAC
from huggingface_sb3 import load_from_hub

# Filename is assumed from the standard huggingface_sb3 convention
checkpoint = load_from_hub(repo_id="marcogfedozzi/sac-PandaReachDense-v3", filename="sac-PandaReachDense-v3.zip")
model = SAC.load(checkpoint)
```
|
alirzb/S5_M1_fold1_ViT_42618571
|
alirzb
| 2024-01-06T16:19:09Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T14:49:14Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S5_M1_fold1_ViT_42618571
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S5_M1_fold1_ViT_42618571
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0293 | 1.0 | 368 | 0.0035 | 0.9992 |
| 0.0006 | 2.0 | 737 | 0.0031 | 0.9984 |
| 0.0001 | 3.0 | 1105 | 0.0017 | 0.9992 |
| 0.0 | 4.0 | 1474 | 0.0016 | 0.9992 |
| 0.0 | 4.99 | 1840 | 0.0013 | 0.9992 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
alirzb/S2_M1_R3_ViT_42618549
|
alirzb
| 2024-01-06T16:16:03Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T14:49:36Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S2_M1_R3_ViT_42618549
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S2_M1_R3_ViT_42618549
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0171 | 1.0 | 307 | 0.0156 | 0.9952 |
| 0.0097 | 2.0 | 614 | 0.0005 | 1.0 |
| 0.0045 | 3.0 | 921 | 0.0021 | 0.9990 |
| 0.0 | 4.0 | 1229 | 0.0001 | 1.0 |
| 0.0001 | 5.0 | 1535 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
LoneStriker/Sensualize-Solar-10.7B-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-06T16:10:32Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-v1.0",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T16:06:06Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- upstage/SOLAR-10.7B-v1.0
---
A finetune of base Solar. Training took 12 hours or so on 2x RTX 6000 ADAs; this is an 8-bit LoRA.
This is meant to be a verbose, smart ERP model.
Experimental.
***
### Prompt Format: Alpaca
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
|
Tobius/runyakore
|
Tobius
| 2024-01-06T16:06:53Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"nyn",
"dataset:tericlabs",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-03T10:42:49Z |
---
language:
- nyn
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- tericlabs
metrics:
- wer
model-index:
- name: Whisper Small Runyankore
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Yogera data
type: tericlabs
config: nyn
split: test
args: nyn
metrics:
- name: Wer
type: wer
value: 96.9176052163604
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Runyankore
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Yogera data dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6289
- Wer: 96.9176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9225 | 0.5 | 100 | 2.3983 | 126.6153 |
| 1.8681 | 1.25 | 200 | 1.6289 | 96.9176 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
dataautogpt3/synthetic-animeV1.1
|
dataautogpt3
| 2024-01-06T16:04:54Z | 46 | 5 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:dataautogpt3/OpenDalleV1.1",
"base_model:adapter:dataautogpt3/OpenDalleV1.1",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-06T15:56:58Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: in the style of <s0><s1>
output:
url: example_1.png
- text: in the style of <s0><s1>
output:
url: example_2.png
- text: in the style of <s0><s1>, manga in the early 1990s,surreal,digitally rendered with glitches appearing throughout,depicted in matte colors,created using a digital medium. illustrated by Junji Ito,Yoshiyuki Sadamoto, and Rumiko Takahashi, a green cartoon frog, pepe,
output:
url: example_3.png
- text: in the style of <s0><s1>
output:
url: example_4.png
- text: in the style of <s0><s1>, manga in the early 1990s,surreal,digitally rendered with glitches appearing throughout,depicted in matte colors,created using a digital medium. illustrated by Junji Ito,Yoshiyuki Sadamoto, and Rumiko Takahashi,
output:
url: example_5.png
- text: in the style of <s0><s1>, manga in the early 1990s,surreal
base_model: dataautogpt3/OpenDalleV1.1
instance_prompt: in the style of <s0><s1>
license: openrail++
---
# SDXL LoRA - dataautogpt3/synthetic-animev1-1
<Gallery />
## Model description
### These are dataautogpt3/synthetic-animev1-1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`synthetic-animev1-1.safetensors` here 💾](/dataautogpt3/synthetic-animev1-1/blob/main/synthetic-animev1-1.safetensors)**.
    - Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:synthetic-animev1-1:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`synthetic-animev1-1_emb.safetensors` here 💾](/dataautogpt3/synthetic-animev1-1/blob/main/synthetic-animev1-1_emb.safetensors)**.
    - Place it in your `embeddings` folder
- Use it by adding `synthetic-animev1-1_emb` to your prompt. For example, `in the style of synthetic-animev1-1_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('dataautogpt3/synthetic-animev1-1', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='dataautogpt3/synthetic-animev1-1', filename='synthetic-animev1-1_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('in the style of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/dataautogpt3/synthetic-animev1-1/tree/main).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
STomoya/efficientnetv2_m.st_safebooru_1k
|
STomoya
| 2024-01-06T16:03:55Z | 16 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-01-06T16:03:12Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for efficientnetv2_m.st_safebooru_1k
## Model Details
- **Metrics:**

|Precision|Recall|F1-score|
|-|-|-|
|0.7782|0.5144|0.5968|
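A hedged inference sketch (not part of the original card); the image path is a placeholder, and applying a sigmoid assumes this is a multi-label tagger:
```python
import timm
import torch
from PIL import Image
from timm.data import create_transform, resolve_data_config

# Load the checkpoint from the Hub via timm's hf_hub integration.
model = timm.create_model("hf_hub:STomoya/efficientnetv2_m.st_safebooru_1k", pretrained=True)
model.eval()

# Build the preprocessing pipeline the model config expects.
transform = create_transform(**resolve_data_config({}, model=model))

# "example.png" is a placeholder image path.
img = Image.open("example.png").convert("RGB")
with torch.no_grad():
    probs = model(transform(img).unsqueeze(0)).sigmoid()  # sigmoid assumed for multi-label tags
```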
|
LoneStriker/Sensualize-Solar-10.7B-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-06T16:02:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-v1.0",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T15:55:35Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- upstage/SOLAR-10.7B-v1.0
---
A finetune of base SOLAR. Training took roughly 12 hours on 2x RTX 6000 Ada GPUs; this is an 8-bit LoRA.
It is meant to be a verbose, smart ERP model.
Experimental.
***
### Prompt Format: Alpaca
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
|
marcogfedozzi/sac-PandaPickAndPlaceDense-v3
|
marcogfedozzi
| 2024-01-06T15:58:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlaceDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-06T15:55:07Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlaceDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: sac
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlaceDense-v3
type: PandaPickAndPlaceDense-v3
metrics:
- type: mean_reward
value: -10.52 +/- 3.95
name: mean_reward
verified: false
---
# **sac** Agent playing **PandaPickAndPlaceDense-v3**
This is a trained model of a **sac** agent playing **PandaPickAndPlaceDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub("marcogfedozzi/sac-PandaPickAndPlaceDense-v3",
                           "sac-PandaPickAndPlaceDense-v3.zip")
model = SAC.load(checkpoint)
```
|
LoneStriker/Sensualize-Solar-10.7B-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-06T15:57:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-v1.0",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T15:55:36Z |
---
license: cc-by-nc-4.0
language:
- en
base_model:
- upstage/SOLAR-10.7B-v1.0
---
A finetune of base SOLAR. Training took roughly 12 hours on 2x RTX 6000 Ada GPUs; this is an 8-bit LoRA.
It is meant to be a verbose, smart ERP model.
Experimental.
***
### Prompt Format: Alpaca
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
|
CyberHarem/koyori_tanemura_watashinitenshigamaiorita
|
CyberHarem
| 2024-01-06T15:43:47Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/koyori_tanemura_watashinitenshigamaiorita",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-06T15:28:51Z |
---
license: mit
datasets:
- CyberHarem/koyori_tanemura_watashinitenshigamaiorita
pipeline_tag: text-to-image
tags:
- art
---
# Lora of koyori_tanemura_watashinitenshigamaiorita
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5100, you need to download `5100/koyori_tanemura_watashinitenshigamaiorita.pt` as the embedding and `5100/koyori_tanemura_watashinitenshigamaiorita.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5100**, with a score of 0.985. The trigger words are:
1. `koyori_tanemura_watashinitenshigamaiorita`
2. `bangs, blush, long_hair, hair_ornament, x_hair_ornament, twintails, purple_eyes, red_hair, closed_mouth, smile, brown_hair, indoors, very_long_hair`
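As an illustrative sketch only (not from the original card): loading both files with diffusers might look roughly like this, assuming the step-5100 files were downloaded locally and that `load_lora_weights` can parse HCP-Diffusion's safetensors output, which is an unverified assumption:
```python
import torch
from diffusers import StableDiffusionPipeline

# Meina/MeinaMix_V11 is the checkpoint used for the preview images.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load the embedding and the LoRA together, as the card instructs.
pipe.load_textual_inversion(
    "5100/koyori_tanemura_watashinitenshigamaiorita.pt",
    token="koyori_tanemura_watashinitenshigamaiorita",
)
pipe.load_lora_weights("5100/koyori_tanemura_watashinitenshigamaiorita.safetensors")

image = pipe("koyori_tanemura_watashinitenshigamaiorita, twintails, smile").images[0]
```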
Use of this model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **5100** | **0.985** | [**Download**](5100/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.901 | [Download](4760/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.981 | [Download](4420/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.897 | [Download](4080/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.984 | [Download](3740/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.941 | [Download](3400/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.970 | [Download](3060/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.978 | [Download](2720/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.967 | [Download](2380/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.976 | [Download](2040/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.960 | [Download](1700/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.875 | [Download](1360/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.966 | [Download](1020/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.905 | [Download](680/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.853 | [Download](340/koyori_tanemura_watashinitenshigamaiorita.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
alirzb/S2_M1_R2_ViT_42618530
|
alirzb
| 2024-01-06T15:41:52Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-06T14:38:17Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: S2_M1_R2_ViT_42618530
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S2_M1_R2_ViT_42618530
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
- Accuracy: 0.9987
## Model description
More information needed
## Intended uses & limitations
More information needed
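A hedged usage sketch (not part of the auto-generated card); the image path is a placeholder, and the label set depends on the unknown training dataset:
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="alirzb/S2_M1_R2_ViT_42618530")

# "example.jpg" is a placeholder path.
print(classifier("example.jpg"))
```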
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0088 | 1.0 | 237 | 0.0385 | 0.9887 |
| 0.0067 | 2.0 | 474 | 0.0155 | 0.9962 |
| 0.0015 | 3.0 | 711 | 0.0038 | 0.9987 |
| 0.0001 | 4.0 | 948 | 0.0011 | 0.9987 |
| 0.0001 | 5.0 | 1185 | 0.0018 | 0.9987 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Deathsquad10/TinyLlama-repeat
|
Deathsquad10
| 2024-01-06T15:39:37Z | 1,527 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T09:00:32Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- text: "<|system|>\nYou are a chatbot who can help code!</s>\n<|user|>\nWrite out the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>\n<|assistant|>\n"
---
<div align="center">
# TinyLlama-1.1B - My Personal Test Update, Version 2
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 0|acc |0.3046|± |0.0134|
| | |none | 0|acc_norm|0.3234|± |0.0137|
|arc_easy |Yaml |none | 0|acc |0.6077|± |0.0100|
| | |none | 0|acc_norm|0.5307|± |0.0102|
|boolq |Yaml |none | 0|acc |0.5948|± |0.0086|
|hellaswag |Yaml |none | 0|acc |0.4601|± |0.0050|
| | |none | 0|acc_norm|0.5987|± |0.0049|
|openbookqa |Yaml |none | 0|acc |0.2420|± |0.0192|
| | |none | 0|acc_norm|0.3500|± |0.0214|
|piqa |Yaml |none | 0|acc |0.7410|± |0.0102|
| | |none | 0|acc_norm|0.7405|± |0.0102|
|winogrande |Yaml |none | 0|acc |0.6093|± |0.0137|
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/edit/main/README.md)'s training recipe.** The model was "initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
#### How to use
You will need `transformers>=4.34`.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
|
EvanD/xlm-roberta-base-danish-ner-daner
|
EvanD
| 2024-01-06T15:38:33Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"named-entity-recognition",
"sequence-tagger-model",
"da",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T13:30:57Z |
---
pipeline_tag: token-classification
tags:
- named-entity-recognition
- sequence-tagger-model
widget:
- text: Mit navn er Amadeus Wolfgang, og jeg bor i Berlin
inference:
parameters:
aggregation_strategy: simple
grouped_entities: true
language:
- da
---
XLM-RoBERTa model trained on [DaNe](https://aclanthology.org/2020.lrec-1.565/), reaching 97.1 macro-F1 on the test set.
| Test metric             | Results |
|-------------------------|---------|
| test_f1_mac_dane_ner    | 0.9713  |
| test_loss_dane_ner      | 0.1138  |
| test_prec_mac_dane_ner  | 0.8712  |
| test_rec_mac_dane_ner   | 0.8684  |
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("EvanD/xlm-roberta-base-danish-ner-daner")
ner_model = AutoModelForTokenClassification.from_pretrained("EvanD/xlm-roberta-base-danish-ner-daner")
nlp = pipeline("ner", model=ner_model, tokenizer=tokenizer, aggregation_strategy="simple")
example = "Mit navn er Amadeus Wolfgang, og jeg bor i Berlin"
ner_results = nlp(example)
print(ner_results)
```
|
Manoj21k/microsoft-phi-2-finetuned
|
Manoj21k
| 2024-01-06T15:37:39Z | 24 | 2 |
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"nlp",
"code",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-06T14:43:39Z |
---
inference: false
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
### Finetuned Model: Manoj21k/microsoft-phi-2-finetuned
### Alpaca Datasets Instruction Fine-Tuning
We are pleased to introduce the Manoj21k/microsoft-phi-2-finetuned model, which has undergone fine-tuning using Alpaca datasets with instructional objectives.
This process aims to enhance the model's performance in understanding and generating responses based on specific instructions. Here are key details about this finetuned model:
## Fine-Tuning Details:
### Datasets Used:
The model has been fine-tuned using Alpaca datasets, which are curated for instructional objectives.
These datasets provide diverse examples and scenarios to improve the model's ability to follow instructions accurately.
### Instructional Objectives:
The fine-tuning process emphasizes the model's proficiency in understanding and responding to prompts provided in an instructional format.
This includes scenarios where explicit instructions are given, allowing the model to generate more contextually relevant and task-specific outputs.
## Intended Use Cases:
### Instruction-Based Tasks:
The finetuned model is particularly well-suited for tasks that involve providing instructions in the prompt, such as generating detailed responses, following specific guidelines, or addressing instructional queries.
### Enhanced Controllability:
Users can expect improved controllability when using this model, making it a valuable asset for applications where precise instruction adherence is crucial.
### Code Format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the finetuned model
finetuned_model = AutoModelForCausalLM.from_pretrained("Manoj21k/microsoft-phi-2-finetuned")
# Tokenize input with instruction and generate output
tokenizer = AutoTokenizer.from_pretrained("Manoj21k/microsoft-phi-2-finetuned")
input_text = "Instruct: Issue with the delivered product"
inputs = tokenizer(input_text, return_tensors="pt", return_attention_mask=False)
output = finetuned_model.generate(**inputs, max_length=200)
# Decode and print the generated text
decoded_output = tokenizer.batch_decode(output)[0]
print(decoded_output)
```
The model then generates a response continuing from the instruction prompt.
**Notes:**
* The fine-tuned model is specialized for instruction-based tasks and may outperform the base Phi-2 model in scenarios that require adherence to explicit instructions.
* Users are encouraged to experiment with various instructional prompts to leverage the model's capabilities effectively.
* As always, we appreciate user feedback to continue refining and improving the model for a wide range of applications.
* If you are using `transformers>=4.36.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for Code: The majority of Phi-2's training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
|
jysssacc/mt0-base_PrefixTuning_lr5e-05_bs2_epoch1_wd0.01
|
jysssacc
| 2024-01-06T15:35:02Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-base",
"base_model:adapter:bigscience/mt0-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-06T14:49:31Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-base
model-index:
- name: mt0-base_PrefixTuning_lr5e-05_bs2_epoch1_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-base_PrefixTuning_lr5e-05_bs2_epoch1_wd0.01
This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4114
## Model description
More information needed
## Intended uses & limitations
More information needed
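A hedged loading sketch (not part of the auto-generated card); mt0-base is an encoder-decoder model, hence the seq2seq class:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "jysssacc/mt0-base_PrefixTuning_lr5e-05_bs2_epoch1_wd0.01"
config = PeftConfig.from_pretrained(adapter_id)

# Load the frozen base model, then attach the prefix-tuning adapter.
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```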
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8019 | 1.0 | 314 | 0.4114 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
kroonen/phi-2-GGUF
|
kroonen
| 2024-01-06T15:20:51Z | 187 | 22 | null |
[
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2023-12-16T02:06:58Z |
---
inference: false
license: mit
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
This is a quantized GGUF version of Microsoft Phi-2 (4_0 and 8_0 bit quantizations, plus the converted FP16 model).
(Link to the original model: https://huggingface.co/microsoft/phi-2)
*Disclaimer*: make sure to have the latest version of llama.cpp, built after commit b9e74f9bca5fdf7d0a22ed25e7a9626335fdfa48.
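A hedged usage sketch via llama-cpp-python (an alternative to the llama.cpp CLI); the quantized filename is a placeholder, not a confirmed file in this repo:
```python
from llama_cpp import Llama

# "phi-2.Q4_0.gguf" is a placeholder for whichever quantization you downloaded.
llm = Llama(model_path="phi-2.Q4_0.gguf")

out = llm("Instruct: Explain GGUF in one sentence.\nOutput:", max_tokens=64)
print(out["choices"][0]["text"])
```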
|
LarryAIDraw/aylss_divineElegance
|
LarryAIDraw
| 2024-01-06T15:19:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-06T15:15:11Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/255630/aylss-tower-of-fantasytof
|
LarryAIDraw/epsilon
|
LarryAIDraw
| 2024-01-06T15:18:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-06T15:14:50Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/171167/epsilon-the-eminence-in-shadow
|
Finnish-NLP/Finnish-finetuned-whisper-models-ggml-format
|
Finnish-NLP
| 2024-01-06T15:06:07Z | 0 | 0 | null |
[
"speech-recognition",
"fi",
"license:apache-2.0",
"region:us"
] | null | 2023-08-19T09:58:51Z |
---
license: apache-2.0
language:
- fi
tags:
- speech-recognition
---
Example of how to use these models with whisper.cpp:
```bash
git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp
git reset --hard 0b9af32a8b3fa7e2ae5f15a9a08f5b10394993f5
make
```
```bash
# Run one of the following:
./main -m ggml-model-fi-tiny.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
./main -m ggml-model-fi-medium.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
./main -m ggml-model-fi-large.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
./main -m ggml-model-fi-large-v3.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
```
```bash
# Sample output should look something like this:
(finetuneEnv) rasmus@DESKTOP-59O9VN1:/mnt/f/Omat_opiskelut/whisper_transformaatio/whisper.cpp$ ./main -m ggml-model-fi-medium.bin -f oma_nauhoitus_16khz.wav -l fi
whisper_init_from_file_with_params_no_state: loading model from 'ggml-model-fi-medium.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51865
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1024
whisper_model_load: n_audio_head = 16
whisper_model_load: n_audio_layer = 24
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1024
whisper_model_load: n_text_head = 16
whisper_model_load: n_text_layer = 24
whisper_model_load: n_mels = 80
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 4 (medium)
whisper_model_load: adding 1608 extra tokens
whisper_model_load: n_langs = 99
whisper_model_load: CPU buffer size = 1533.52 MB
whisper_model_load: model size = 1533.14 MB
whisper_init_state: kv self size = 132.12 MB
whisper_init_state: kv cross size = 147.46 MB
whisper_init_state: compute buffer (conv) = 25.61 MB
whisper_init_state: compute buffer (encode) = 170.28 MB
whisper_init_state: compute buffer (cross) = 7.85 MB
whisper_init_state: compute buffer (decode) = 98.32 MB
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0 |
main: processing 'oma_nauhoitus_16khz.wav' (144160 samples, 9.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = fi, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:09.000] Moi, nimeni on Rasmus ja testaan tekoälymallia, joka tunnistaa puheeni ja kirjoittaa sen tekstiksi.
```
|
mart9992/winnie22
|
mart9992
| 2024-01-06T15:01:28Z | 0 | 0 | null |
[
"endpoints_compatible",
"region:us"
] | null | 2024-01-06T14:07:51Z |
---
title: Grounded Segment Anything
emoji: 📚
colorFrom: purple
colorTo: yellow
sdk: gradio
sdk_version: 3.24.1
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|