Dataset schema (one row per model card):

| Column | Type | Range / stats |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-05 00:41:53 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 539 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-05 00:41:31 |
| card | string | length 11 to 1.01M |

**RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf** · author: RichardErkhov · last modified: 2024-10-14T14:28:44Z · downloads: 17 · likes: 0 · library: null · tags: [gguf, endpoints_compatible, region:us] · pipeline: null · created: 2024-10-14T03:52:00Z
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tess-2.0-Mixtral-8x7B - GGUF
- Model creator: https://huggingface.co/migtissera/
- Original model: https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tess-2.0-Mixtral-8x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q2_K.gguf) | Q2_K | 16.12GB |
| [Tess-2.0-Mixtral-8x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [Tess-2.0-Mixtral-8x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [Tess-2.0-Mixtral-8x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K.gguf) | Q3_K | 21.0GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [Tess-2.0-Mixtral-8x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [Tess-2.0-Mixtral-8x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [Tess-2.0-Mixtral-8x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_0.gguf) | Q4_0 | 24.63GB |
| [Tess-2.0-Mixtral-8x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [Tess-2.0-Mixtral-8x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [Tess-2.0-Mixtral-8x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_K.gguf) | Q4_K | 26.49GB |
| [Tess-2.0-Mixtral-8x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [Tess-2.0-Mixtral-8x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q4_1.gguf) | Q4_1 | 27.32GB |
| [Tess-2.0-Mixtral-8x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_0.gguf) | Q5_0 | 30.02GB |
| [Tess-2.0-Mixtral-8x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [Tess-2.0-Mixtral-8x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_K.gguf) | Q5_K | 30.95GB |
| [Tess-2.0-Mixtral-8x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [Tess-2.0-Mixtral-8x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q5_1.gguf) | Q5_1 | 32.71GB |
| [Tess-2.0-Mixtral-8x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/blob/main/Tess-2.0-Mixtral-8x7B.Q6_K.gguf) | Q6_K | 35.74GB |
| [Tess-2.0-Mixtral-8x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/migtissera_-_Tess-2.0-Mixtral-8x7B-gguf/tree/main/) | Q8_0 | 46.22GB |
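These GGUF files can be run locally with llama.cpp or its Python bindings. Below is a minimal sketch assuming the `llama-cpp-python` package is installed and one of the quants from the table (here Q4_K_M, illustratively) has been downloaded; the sampling settings are placeholders.

```python
# Minimal sketch: running a downloaded quant with llama-cpp-python
# (`pip install llama-cpp-python`). File path and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./Tess-2.0-Mixtral-8x7B.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

# Tess expects a SYSTEM/USER/ASSISTANT prompt format (see the original
# model description below).
prompt = "SYSTEM: You are a helpful assistant.\nUSER: What is GGUF?\nASSISTANT: "
result = llm(prompt, max_tokens=256, stop=["USER:"])
print(result["choices"][0]["text"])
```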
Original model description:
---
license: apache-2.0
---
# Join My General AI Discord (NeuroLattice):
https://discord.gg/Hz6GrwGFKD
# Tess-2.0-Mixtral
Tess, short for Tesoro (Italian for "treasure"), is a general-purpose Large Language Model series. Tess-2.0-Mixtral was trained on the mistralai/Mixtral-8x7B-v0.1 base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
### Code example showing how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Tess-2.0-Mixtral"
output_file_path = "./conversations.jsonl"

# Load the model in float16 across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the prompt and move it to the GPU.
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    # Sampling settings for generation.
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )

    # Keep only the newly generated tokens and stop at the next "USER:" turn.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return answer


conversation = "SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"

    # Save the conversation turn as a JSON line.
    json_data = {"prompt": user_input, "answer": answer}
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility that inappropriate, biased, or offensive content may be generated.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
**transitionGap/BIRTHCERT-IN-Qwen2.5-0.5B-smallset** · author: transitionGap · last modified: 2024-10-14T14:20:29Z · downloads: 86 · likes: 0 · library: transformers · tags: [transformers, safetensors, gguf, qwen2, text-generation-inference, unsloth, trl, en, base_model:unsloth/Qwen2.5-0.5B-bnb-4bit, base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us] · pipeline: null · created: 2024-10-14T14:17:38Z
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
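For context, a typical Unsloth + TRL supervised fine-tuning setup looks roughly like the sketch below. The dataset, LoRA settings, and hyperparameters are illustrative placeholders, not the ones used for this model, and the TRL API varies somewhat across versions.

```python
# Rough sketch of an Unsloth + TRL SFT run; all data and hyperparameters
# here are illustrative, not those used to train this model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-0.5B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical JSONL file where each record has a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```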
**Lavesh-Akhadkar/expander** · author: Lavesh-Akhadkar · last modified: 2024-10-14T14:14:42Z · downloads: 168 · likes: 0 · library: transformers · tags: [transformers, safetensors, bart, text2text-generation, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us] · pipeline: text2text-generation · created: 2024-10-14T14:13:44Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**mav23/Gemma-2-Ataraxy-v2-9B-GGUF** · author: mav23 · last modified: 2024-10-14T14:09:17Z · downloads: 68 · likes: 0 · library: transformers · tags: [transformers, gguf, mergekit, merge, base_model:ifable/gemma-2-Ifable-9B, base_model:merge:ifable/gemma-2-Ifable-9B, base_model:jsgreenawalt/gemma-2-9B-it-advanced-v2.1, base_model:merge:jsgreenawalt/gemma-2-9B-it-advanced-v2.1, model-index, endpoints_compatible, region:us, conversational] · pipeline: null · created: 2024-10-14T13:06:56Z
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- ifable/gemma-2-Ifable-9B
- jsgreenawalt/gemma-2-9B-it-advanced-v2.1
model-index:
- name: Gemma-2-Ataraxy-v2-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 21.36
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 39.8
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.83
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.3
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.88
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 35.79
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v2-9B
      name: Open LLM Leaderboard
---
# Gemma 2 Ataraxy v2 9B
Finally, after much testing, a successor to the first Gemma 2 Ataraxy 9B. Same kind of recipe, built on the same principles and the same concept as the last Ataraxy. It's not quite a better overall model: v1 is more well rounded, while v2 is a little better at writing but has a little more slop and some other issues. Consider this a sidegrade.

## GGUF / EXL2 Quants
Bartowski quants (imatrix): https://huggingface.co/bartowski/Gemma-2-Ataraxy-v2-9B-GGUF
Mradermacher quants (static): https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v2-9B-GGUF
Mradermacher quants (imatrix): https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v2-9B-i1-GGUF
Bartowski and mradermacher use different calibration data for their imatrix quants, I believe, and the static quants of course use none. Pick your poison.
More coming soon.
## Format
Use Gemma 2 format.
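For reference, the Gemma 2 chat template renders turns roughly as follows; this is a sketch, and the tokenizer's built-in chat template is the authoritative source:

```
<start_of_turn>user
Write a haiku about autumn.<end_of_turn>
<start_of_turn>model
```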
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
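For intuition, SLERP interpolates along the arc between two weight tensors rather than along a straight line, which preserves their magnitude better than plain averaging. A minimal sketch of the standard formula is below (this is not mergekit's actual implementation; `t` corresponds to the interpolation factors in the configuration further down):

```python
# Minimal sketch of spherical linear interpolation (SLERP) between two
# weight tensors; not mergekit's actual implementation.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Angle between the two tensors, treated as flat vectors.
    v0_u = v0 / (v0.norm() + eps)
    v1_u = v1 / (v1.norm() + eps)
    dot = torch.clamp(torch.sum(v0_u * v1_u), -1.0, 1.0)
    omega = torch.acos(dot)
    # Nearly parallel tensors: fall back to ordinary linear interpolation.
    if omega.abs() < 1e-6:
        return (1.0 - t) * v0 + t * v1
    so = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
```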
### Models Merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The following models were included in the merge:
* [ifable/gemma-2-Ifable-9B](https://huggingface.co/ifable/gemma-2-Ifable-9B)
* [jsgreenawalt/gemma-2-9B-it-advanced-v2.1](https://huggingface.co/jsgreenawalt/gemma-2-9B-it-advanced-v2.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: ifable/gemma-2-Ifable-9B
dtype: bfloat16
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - layer_range: [0, 42]
        model: jsgreenawalt/gemma-2-9B-it-advanced-v2.1
      - layer_range: [0, 42]
        model: ifable/gemma-2-Ifable-9B
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v2-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |19.16|
|IFEval (0-Shot) |21.36|
|BBH (3-Shot) |39.80|
|MATH Lvl 5 (4-Shot)| 0.83|
|GPQA (0-shot) |12.30|
|MuSR (0-shot) | 4.88|
|MMLU-PRO (5-shot) |35.79|
Second-highest-ranked open-weight model on EQ-Bench.
**mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF** · author: mradermacher · last modified: 2024-10-14T14:09:07Z · downloads: 44 · likes: 0 · library: transformers · tags: [transformers, gguf, en, base_model:ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1, base_model:quantized:ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1, license:apache-2.0, endpoints_compatible, region:us, imatrix, conversational] · pipeline: null · created: 2024-10-14T11:03:52Z
---
base_model: ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
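As a concrete example, a single quant file from this repo can be fetched with the `huggingface_hub` Python library; the file name below is one row from the table that follows:

```python
# Sketch: downloading one quant file with huggingface_hub
# (`pip install huggingface_hub`). Pick any file from the table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF",
    filename="InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```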
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 4.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 5.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 7.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 8.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q4_0.gguf) | i1-Q4_0 | 11.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/InternLM2_5-20B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/InternLM2_5-20B-ArliAI-RPMax-v1.1.i1-Q6_K.gguf) | i1-Q6_K | 16.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
**transitionGap/PROPERTYREG-IN-Qwen2.5-0.5B-smallset** · author: transitionGap · last modified: 2024-10-14T14:06:30Z · downloads: 106 · likes: 0 · library: transformers · tags: [transformers, safetensors, gguf, qwen2, text-generation-inference, unsloth, trl, en, base_model:unsloth/Qwen2.5-0.5B-bnb-4bit, base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us] · pipeline: null · created: 2024-10-14T14:03:37Z
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**Boru/rm_1014_70K_step_600_lr_1e6** · author: Boru · last modified: 2024-10-14T14:04:17Z · downloads: 7 · likes: 0 · library: transformers · tags: [transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] · pipeline: text-generation · created: 2024-10-14T13:12:06Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF** · author: mradermacher · last modified: 2024-10-14T14:01:07Z · downloads: 11 · likes: 0 · library: transformers · tags: [transformers, gguf, mergekit, merge, en, base_model:bunnycore/Llama-3-8B-TitanFusion-Mix-2, base_model:quantized:bunnycore/Llama-3-8B-TitanFusion-Mix-2, endpoints_compatible, region:us] · pipeline: null · created: 2024-10-14T07:50:59Z
---
base_model: bunnycore/Llama-3-8B-TitanFusion-Mix-2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3-8B-TitanFusion-Mix-2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
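For multi-part files specifically, the parts are typically concatenated byte-for-byte into a single GGUF; a minimal Python sketch (the part names are hypothetical):

```python
# Sketch: joining a multi-part GGUF byte-for-byte; part names are hypothetical.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part into the output
```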
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-TitanFusion-Mix-2-GGUF/resolve/main/Llama-3-8B-TitanFusion-Mix-2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
**Lawrence/parler-tts-mini-v1-Lawrence-Ndizi_untransformed_r4** · author: Lawrence · last modified: 2024-10-14T13:50:07Z · downloads: 48 · likes: 0 · library: transformers · tags: [transformers, safetensors, parler_tts, text2text-generation, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us] · pipeline: text2text-generation · created: 2024-10-14T13:49:11Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**JefiRyan/Gemma-2B-Unsloth-mental-health-merged** · author: JefiRyan · last modified: 2024-10-14T13:49:39Z · downloads: 122 · likes: 0 · library: transformers · tags: [transformers, safetensors, gemma, text-generation, text-generation-inference, unsloth, trl, sft, conversational, en, base_model:unsloth/gemma-2b-it-bnb-4bit, base_model:finetune:unsloth/gemma-2b-it-bnb-4bit, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] · pipeline: text-generation · created: 2024-10-14T13:48:54Z
---
base_model: unsloth/gemma-2b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
---
# Uploaded model
- **Developed by:** JefiRyan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**arjunanand13/PALD_Florence-30e** · author: arjunanand13 · last modified: 2024-10-14T13:41:31Z · downloads: 107 · likes: 0 · library: transformers · tags: [transformers, safetensors, florence2, text-generation, custom_code, arxiv:1910.09700, autotrain_compatible, region:us] · pipeline: text-generation · created: 2024-10-14T13:40:46Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**transitionGap/ROAD-TRANSPORT-IN-Qwen2.5-0.5B-smallset** · author: transitionGap · last modified: 2024-10-14T13:37:48Z · downloads: 92 · likes: 0 · library: transformers · tags: [transformers, safetensors, gguf, qwen2, text-generation-inference, unsloth, trl, en, base_model:unsloth/Qwen2.5-0.5B-bnb-4bit, base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us] · pipeline: null · created: 2024-10-14T13:35:00Z
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**meandyou200175/vn-bi-encoder-yte** · author: meandyou200175 · last modified: 2024-10-14T13:32:45Z · downloads: 160 · likes: 0 · library: transformers · tags: [transformers, safetensors, roberta, feature-extraction, arxiv:1910.09700, endpoints_compatible, region:us] · pipeline: feature-extraction · created: 2024-10-14T13:32:08Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**transitionGap/RTIACT-IN-Qwen2.5-0.5B-smallset** · author: transitionGap · last modified: 2024-10-14T13:30:56Z · downloads: 84 · likes: 0 · library: transformers · tags: [transformers, safetensors, gguf, qwen2, text-generation-inference, unsloth, trl, en, base_model:unsloth/Qwen2.5-0.5B-bnb-4bit, base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us] · pipeline: null · created: 2024-10-14T13:27:44Z
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**oma7777/finetunellama38b** · author: oma7777 · last modified: 2024-10-14T13:24:16Z · downloads: 5 · likes: 0 · library: transformers · tags: [transformers, safetensors, llama, text-generation, text-generation-inference, unsloth, trl, sft, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] · pipeline: text-generation · created: 2024-10-14T13:19:48Z
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** oma7777
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**arjunanand13/PALD_Florence-25e** · author: arjunanand13 · last modified: 2024-10-14T13:21:41Z · downloads: 104 · likes: 0 · library: transformers · tags: [transformers, safetensors, florence2, text-generation, custom_code, arxiv:1910.09700, autotrain_compatible, region:us] · pipeline: text-generation · created: 2024-10-14T13:20:58Z
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bhattasp/whisper-finetuned-all-3_v2v
|
bhattasp
| 2024-10-14T13:14:24Z | 78 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-14T11:21:00Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: bhattasp/whisper-finetuned-all-3_v2v
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bhattasp/whisper-finetuned-all-3_v2v
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the AtCO2_UWB, Atcosim, and Bial datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4194
- Wer: 36.3094
## Model description
More information needed
## Intended uses & limitations
More information needed
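A minimal inference sketch (an assumption on my part; the exact decoding settings used during evaluation are not documented in this card):
```python
# Minimal ASR sketch using the standard transformers pipeline.
# Assumes ffmpeg is available; the audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bhattasp/whisper-finetuned-all-3_v2v",
)

result = asr("path/to/audio.wav")
print(result["text"])
```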
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5723 | 1.0 | 1342 | 0.5851 | 50.4899 |
| 0.3731 | 2.0 | 2684 | 0.4669 | 37.5358 |
| 0.2662 | 3.0 | 4026 | 0.4322 | 35.8245 |
| 0.2041 | 4.0 | 5368 | 0.4189 | 36.0145 |
| 0.1957 | 5.0 | 6710 | 0.4194 | 36.3094 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
gurevichan/Carlos-Jones-3349770_001190000
|
gurevichan
| 2024-10-14T13:12:47Z | 123 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T13:11:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tejas-vaia/Renew_Cars_Resume
|
tejas-vaia
| 2024-10-14T13:11:59Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-14T13:07:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Himanshu99001/classify-bluesky
|
Himanshu99001
| 2024-10-14T13:10:16Z | 119 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T13:10:12Z |
---
library_name: transformers
license: apache-2.0
base_model: albert/albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: classify-bluesky
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classify-bluesky
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0173
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
- Accuracy Label Bluesky: 1.0
- Accuracy Label Specific: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
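A minimal usage sketch (assuming the standard transformers text-classification pipeline; the example input is illustrative, and the label names come from the model's own config):
```python
# Minimal text-classification sketch for the fine-tuned ALBERT model.
from transformers import pipeline

clf = pipeline("text-classification", model="Himanshu99001/classify-bluesky")

# Illustrative input; the card does not document the task's inputs.
print(clf("Just set up my account on the new network!"))
# Output shape: [{'label': ..., 'score': ...}]
```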
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
transitionGap/RERA-IN-Qwen2.5-0.5B-smallset
|
transitionGap
| 2024-10-14T13:07:05Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Qwen2.5-0.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T13:03:56Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
asif00/bangla-llama-1B-gguf-16bit
|
asif00
| 2024-10-14T13:07:01Z | 63 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"question-answering",
"bn",
"dataset:OdiaGenAI/all_combined_bengali_252k",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-09-26T23:42:35Z |
---
language:
- bn
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- gguf
datasets:
- OdiaGenAI/all_combined_bengali_252k
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: question-answering
---
# Bangla LLaMA GGUF 1B-16bit
**Bangla LLaMA GGUF** is a 1-billion-parameter language model optimized for Bengali-language tasks such as context-based question answering and retrieval-augmented generation. It is derived from **LLaMA 3.2 1B** and trained on the [OdiaGenAI/all_combined_bengali_252k](https://huggingface.co/datasets/OdiaGenAI/all_combined_bengali_252k) dataset.
## Features
- **Model Size:** 1B parameters
- **Format:** GGUF (16-bit)
- **Language:** Bengali
- **Use Cases:**
- Context-based Question Answering
- Bengali Retrieval-Augmented Generation
- **Integration:** Compatible with `llama.cpp` and Hugging Face `transformers` (with conversion)
## Usage
### 1. Using with `llama.cpp`
#### Setup
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
```
#### Run Inference
```bash
./main -m path/to/asif00/bangla-llama-1B-gguf-16bit.gguf -p "আপনার প্রশ্ন এখানে"
```
### 2. Using with Hugging Face Transformers
**Note:** GGUF format is not directly supported by `transformers`. Conversion to a compatible format is required.
#### Prerequisites
```bash
pip install transformers accelerate
```
#### Example Script
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load tokenizer and model (after conversion)
tokenizer = AutoTokenizer.from_pretrained("asif00/bangla-llama-1B")
# Move the model to the GPU so it matches the inputs below.
model = AutoModelForCausalLM.from_pretrained("path/to/converted-model").to("cuda")
prompt = """
নিচের নির্দেশনা বাংলা ভাষায় যা একটি কাজ বর্ণনা করে, এবং ইনপুটও বাংলা ভাষায় যা অতিরিক্ত প্রসঙ্গ প্রদান করে। উপযুক্তভাবে অনুরোধ পূরণ করে বাংলা ভাষায় একটি প্রতিক্রিয়া লিখুন।
### নির্দেশনা:
{}
### ইনপুট:
{}
### প্রতিক্রিয়া:
"""
def generate_response(instruction, context):
    input_text = prompt.format(instruction, context)
    inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
    output = model.generate(**inputs, max_length=512, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(output[0], skip_special_tokens=True)
    response = response.split("### প্রতিক্রিয়া:")[-1].strip()
    return response
# Example
instruction = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর সম্পর্কে একটি সংক্ষিপ্ত বিবরণ দিন।"
context = "মহাশ্বেতা দেবী ২০১৬ সালে হৃদরোগে আক্রান্ত হয়ে কলকাতায় মৃত্যুবরণ করেন।"
print(generate_response(instruction, context))
```
## Example
```python
question = "ভারতীয় বাঙালি কথাসাহিত্যিক মহাশ্বেতা দেবীর মৃত্যু কবে হয় ?"
context = (
"২০১৬ সালের ২৩ জুলাই হৃদরোগে আক্রান্ত হয়ে মহাশ্বেতা দেবী কলকাতার বেল ভিউ ক্লিনিকে ভর্তি হন। "
"সেই বছরই ২৮ জুলাই একাধিক অঙ্গ বিকল হয়ে তাঁর মৃত্যু ঘটে। তিনি মধুমেহ, সেপ্টিসেমিয়া ও মূত্র সংক্রমণ রোগেও ভুগছিলেন।"
)
answer = generate_response(question, context)
print("উত্তর:", answer)
```
**Output:**
```
উত্তর: মহাশ্বেতা দেবী ২৮ জুলাই ২০১৬ সালে মৃত্যুবরণ করেন।
```
## Limitations
- **Dataset Size:** Trained on a limited dataset, which may affect response accuracy.
- **Factuality:** May generate incorrect or nonsensical answers.
- **Language Support:** Primarily optimized for Bengali; performance may vary for other languages.
## Disclaimer
The **Bangla LLaMA GGUF** model's performance is contingent on the quality and diversity of the training data. Users should verify the information generated, especially for critical applications.
## Additional Resources
- **llama.cpp Repository:** [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
- **Hugging Face Dataset:** [OdiaGenAI/all_combined_bengali_252k](https://huggingface.co/datasets/OdiaGenAI/all_combined_bengali_252k)
- **Model Page:** [asif00/bangla-llama-1B-gguf-16bit](https://huggingface.co/asif00/bangla-llama-1B-gguf-16bit)
|
Edoigtrd/T5-nl2bash
|
Edoigtrd
| 2024-10-14T13:06:12Z | 132 | 0 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"code",
"en",
"dataset:Edoigtrd/tldr-pages",
"license:cc-by-nc-sa-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-04-04T17:06:18Z |
---
license: cc-by-nc-sa-2.0
datasets:
- Edoigtrd/tldr-pages
language:
- en
library_name: transformers
tags:
- code
---
# T5-nl2bash
T5-nl2bash is a model that generates bash commands from natural language descriptions.
This repository contains the trained model, fine-tuned from [Flan-T5 base](https://huggingface.co/google/flan-t5-base).
# Training data
- Tldr.sh pages - [tldr-dataset](https://huggingface.co/datasets/Edoigtrd/tldr-pages)
- nl2bash - [nl2bash](https://github.com/TellinaTool/nl2bash)
# Model
The model is fine-tuned from the [Flan-T5 base](https://huggingface.co/google/flan-t5-base) model.
It was trained on a single Nvidia L4 from Scaleway for 4 hours (4 epochs).
The model aims to generate bash commands from natural language descriptions.
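# Usage
A minimal generation sketch (assuming the plain text2text API; the exact prompt format used during training is not documented, so the bare task description below is a guess):
```python
# Minimal sketch: generate a bash command from a natural-language description.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Edoigtrd/T5-nl2bash")
model = AutoModelForSeq2SeqLM.from_pretrained("Edoigtrd/T5-nl2bash")

task = "Find all *.ogg files on the system ignoring the case"
inputs = tokenizer(task, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```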
# Results
> The results are evaluated on a 500-example evaluation subset of the datasets cited above.
| Metric | Flan-T5 - Nl2bash |
|--------------------|------------|
| exact | 0.064 |
| rouge | 0.530054 |
| bleu | 0.177511 |
| Levenshtein average | 0.612479 |
| Levenshtein median | 0.608696 |
# Examples
| Task | Command |
|------|---------|
| Count time, calls, and errors for each system call and report a summary on program exit | `slurmctl -c path/to/file` |
| Search for a word in all the normal/regular files in the current folder | `find . -type f -exec grep -H` |
| Start the daemon with verbose output coming from specific files in the bluetoothd or plugins source | `bluetoothd-start --verbose path/to/file` |
| Find all directories under current directory and make them read, write, and executable for owner & group and remove read-write-execute permission for others | `find . -type d -exec chmod ug=` |
| Set the verbosity level (error, warn, info, debug, or trace) | `svn -v verbosity` |
| Show the configuration of a virtual machine, specifying its ID | `qm show vm_id` |
| Import VPN configuration | `ipconfig import config` |
| Find all the files ending with ".coffee" in the current folder and search for the words "re" in each line | `find . -name "*.coffee" -exec grep` |
| Run a program from a file | `pstool --run path/to/file` |
| Display details and metadata about a specified portable service image | `portablectl show-image path/to/image` |
| Display all the files in the current folder excluding those which are in the path of ".git" | `find . -path '*.git' -prune -` |
| Find all *.ogg files on the system ignoring the case | `find / -iname "*.ogg"` |
| Print and recursively remove the alphabetically last directory in the current directory | `rmdir -rf` |
# License
[Creative Commons cc-by-nc-sa 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
|
zelk12/MT-gemma-2-9B
|
zelk12
| 2024-10-14T13:02:05Z | 7 | 2 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-BM-gemma-2-9B",
"base_model:merge:zelk12/MT-BM-gemma-2-9B",
"base_model:zelk12/MT-GIMMA-gemma-2-9B",
"base_model:merge:zelk12/MT-GIMMA-gemma-2-9B",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-11T15:11:48Z |
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- zelk12/MT-BM-gemma-2-9B
- zelk12/MT-GIMMA-gemma-2-9B
model-index:
- name: MT-gemma-2-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 79.68
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 43.32
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.3
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.75
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.56
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 35.82
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT-gemma-2-9B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-BM-gemma-2-9B](https://huggingface.co/zelk12/MT-BM-gemma-2-9B)
* [zelk12/MT-GIMMA-gemma-2-9B](https://huggingface.co/zelk12/MT-GIMMA-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-BM-gemma-2-9B
- model: zelk12/MT-GIMMA-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-BM-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.5
```
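To sanity-check the merged weights, a minimal loading sketch (assuming the standard transformers API; dtype and device placement below are illustrative choices):
```python
# Minimal sketch: load the merged model and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zelk12/MT-gemma-2-9B")
model = AutoModelForCausalLM.from_pretrained(
    "zelk12/MT-gemma-2-9B",
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```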
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zelk12__MT-gemma-2-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |30.24|
|IFEval (0-Shot) |79.68|
|BBH (3-Shot) |43.32|
|MATH Lvl 5 (4-Shot)| 0.30|
|GPQA (0-shot) |12.75|
|MuSR (0-shot) | 9.56|
|MMLU-PRO (5-shot) |35.82|
|
arjunanand13/PALD_Florence-20e
|
arjunanand13
| 2024-10-14T13:01:53Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-14T13:01:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CheeLi03/whisper-tiny-pt-5000
|
CheeLi03
| 2024-10-14T13:01:19Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:fleurs",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-10-14T10:19:55Z |
---
base_model: openai/whisper-tiny
datasets:
- fleurs
language:
- pt
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Tiny Portuguese 5000 - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: pt_br
split: None
args: 'config: pt split: test'
metrics:
- type: wer
value: 102.8207418551079
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Portuguese 5000 - Chee Li
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6510
- Wer: 102.8207
## Model description
More information needed
## Intended uses & limitations
More information needed
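A minimal inference sketch (an assumption on my part; forcing Portuguese decoding is an illustrative choice, not something documented in this card):
```python
# Minimal ASR sketch for the Portuguese fine-tune.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-tiny-pt-5000",
)

# Force Portuguese transcription; the audio path is a placeholder.
result = asr(
    "path/to/audio.wav",
    generate_kwargs={"language": "portuguese", "task": "transcribe"},
)
print(result["text"])
```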
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 625
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.1445 | 5.0251 | 1000 | 0.5040 | 109.3037 |
| 0.0131 | 10.0503 | 2000 | 0.5788 | 110.2628 |
| 0.0043 | 15.0754 | 3000 | 0.6183 | 112.4207 |
| 0.0027 | 20.1005 | 4000 | 0.6429 | 109.2708 |
| 0.0022 | 25.1256 | 5000 | 0.6510 | 102.8207 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
KhanhCong/blip_title_v1.0_e4
|
KhanhCong
| 2024-10-14T13:00:56Z | 63 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-14T13:00:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf
|
RichardErkhov
| 2024-10-14T13:00:02Z | 5 | 0 | null |
[
"gguf",
"arxiv:2405.03548",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-14T02:08:02Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MAmmoTH2-8x7B - GGUF
- Model creator: https://huggingface.co/TIGER-Lab/
- Original model: https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MAmmoTH2-8x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q2_K.gguf) | Q2_K | 16.12GB |
| [MAmmoTH2-8x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [MAmmoTH2-8x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [MAmmoTH2-8x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [MAmmoTH2-8x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [MAmmoTH2-8x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K.gguf) | Q3_K | 21.0GB |
| [MAmmoTH2-8x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [MAmmoTH2-8x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [MAmmoTH2-8x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [MAmmoTH2-8x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_0.gguf) | Q4_0 | 24.63GB |
| [MAmmoTH2-8x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [MAmmoTH2-8x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [MAmmoTH2-8x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_K.gguf) | Q4_K | 26.49GB |
| [MAmmoTH2-8x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [MAmmoTH2-8x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q4_1.gguf) | Q4_1 | 27.32GB |
| [MAmmoTH2-8x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_0.gguf) | Q5_0 | 30.02GB |
| [MAmmoTH2-8x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [MAmmoTH2-8x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_K.gguf) | Q5_K | 30.95GB |
| [MAmmoTH2-8x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [MAmmoTH2-8x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q5_1.gguf) | Q5_1 | 32.71GB |
| [MAmmoTH2-8x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/blob/main/MAmmoTH2-8x7B.Q6_K.gguf) | Q6_K | 35.74GB |
| [MAmmoTH2-8x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/TIGER-Lab_-_MAmmoTH2-8x7B-gguf/tree/main/) | Q8_0 | 46.22GB |
Original model description:
---
license: mit
language:
- en
datasets:
- TIGER-Lab/WebInstructSub
metrics:
- accuracy
library_name: transformers
---
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
| **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 |
| **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 |
| **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.
## Chat Format
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
We also found, however, that the model is not particularly sensitive to the chat template.
## Usage
You can use the models through Huggingface's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
Check our Github repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2
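A minimal sketch along those lines (the chat format above is applied by hand; the math question and sampling settings are illustrative):
```python
# Minimal sketch: text-generation pipeline using the chat format shown above.
# The tokenizer adds the <s> BOS token automatically.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-8x7B",
    device_map="auto",  # the 8x7B model needs multiple GPUs or offloading
)

prompt = "[INST] What is the derivative of x^2 * sin(x)? [/INST]"
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```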
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Even so, not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
```
|
an303042/MuranoGlass_v1
|
an303042
| 2024-10-14T12:53:08Z | 8 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] |
text-to-image
| 2024-10-14T12:43:14Z |
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1728866446928__000003000_0.jpg
text: wounded centaur, mythical creature Murano Glass style
- output:
url: samples/1728866470561__000003000_1.jpg
text: ruins of athens, snake Murano Glass style
- output:
url: samples/1728866494488__000003000_2.jpg
text: silver vampire sword Murano Glass style
- output:
url: samples/1728866518162__000003000_3.jpg
text: Dog and fire hydrant, Murano Glass style
- output:
url: samples/1728866541826__000003000_4.jpg
text: a NYC yellow taxi cab, Murano Glass style
- output:
url: samples/1728866565851__000003000_5.jpg
text: juicy cheeseburger, Murano Glass style
base_model: black-forest-labs/FLUX.1-dev
trigger: Murano Glass style
instance_prompt: Murano Glass style
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# MuranoGlass_v1
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `an303042`.
<Gallery />
## Trigger words
You should use `Murano Glass style` to trigger the image generation.
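A minimal generation sketch with diffusers (assuming the adapter loads directly from this repo; dtype and step count are illustrative):
```python
# Minimal sketch: apply the LoRA on top of FLUX.1-dev and generate an image.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("an303042/MuranoGlass_v1")
pipe.to("cuda")

image = pipe(
    "juicy cheeseburger, Murano Glass style",  # include the trigger words
    num_inference_steps=28,
).images[0]
image.save("murano_cheeseburger.png")
```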
## Download model
Weights for this model are available in Safetensors format.
[Download](/an303042/MuranoGlass_v1/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
Habib94/gpt3-chat-kidney-desease
|
Habib94
| 2024-10-14T12:52:05Z | 152 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-08T17:06:13Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gurevichan/Barbara-Gutierrez-397864_001190000
|
gurevichan
| 2024-10-14T12:51:18Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T12:50:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
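Pending details from the author, a minimal hedged sketch based only on this row's metadata (a Llama-family conversational text-generation model) might look like:
```python
from transformers import pipeline

# Repo id taken from the dataset row's metadata; the chat prompt is an assumption.
pipe = pipeline("text-generation", model="gurevichan/Barbara-Gutierrez-397864_001190000")
messages = [{"role": "user", "content": "Hello, who are you?"}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```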
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nesemenpolkov/msu-wiki-ner
|
nesemenpolkov
| 2024-10-14T12:48:37Z | 271 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"msu",
"wiki",
"fine-tuned",
"ru",
"dataset:RCC-MSU/collection3",
"base_model:Babelscape/wikineural-multilingual-ner",
"base_model:finetune:Babelscape/wikineural-multilingual-ner",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-14T09:02:45Z |
---
library_name: transformers
tags:
- ner
- msu
- wiki
- fine-tuned
datasets:
- RCC-MSU/collection3
language:
- ru
metrics:
- precision
- recall
- f1
base_model:
- Babelscape/wikineural-multilingual-ner
pipeline_tag: token-classification
---
# Fine-tuned multilingual model for Russian-language NER
This is the model card for a fine-tuned [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner), which uses multilingual mBERT as its base.
I've fine-tuned it on the [RCC-MSU/collection3](https://huggingface.co/datasets/RCC-MSU/collection3) dataset for the token-classification task. The dataset uses the BIO tagging scheme with the following labels:
```python
label_names = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
```
## Model Details
Fine-tuning ran for 3 epochs and produced the following metrics:
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
| ----- | ------------- | --------------- | --------- | ------ | -- | -------- |
| 1 | 0.041000 | 0.032810 | 0.959569 | 0.974253 | 0.966855 | 0.993325 |
| 2 | 0.020800 | 0.028395 | 0.959569 | 0.974253 | 0.966855 | 0.993325 |
| 3 | 0.010500 | 0.029138 | 0.963239 | 0.973767 | 0.968474 | 0.993247 |
To avoid overfitting on the small number of training samples, I used a high weight decay of 0.1.
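As a hedged illustration (only the epoch count and weight decay are stated above; everything else is an assumption), the fine-tuning arguments could look like:
```python
from transformers import TrainingArguments

# Hypothetical values except num_train_epochs and weight_decay, which the card states.
training_args = TrainingArguments(
    output_dir="msu-wiki-ner",
    num_train_epochs=3,
    weight_decay=0.1,                # high weight decay to curb overfitting
    learning_rate=2e-5,              # assumed
    per_device_train_batch_size=16,  # assumed
)
```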
## Basic usage
You can use this model with a `pipeline` for the token-classification task:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
model_ckpt = "nesemenpolkov/msu-wiki-ner"
label_names = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
id2label = {i: label for i, label in enumerate(label_names)}
label2id = {v: k for k, v in id2label.items()}
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModelForTokenClassification.from_pretrained(
model_ckpt,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True
)
pipe = pipeline(
task="token-classification",
model=model,
tokenizer=tokenizer,
device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
aggregation_strategy="simple"
)
demo_sample = "Этот Иван Иванов, в паспорте Иванов И.И."
with torch.no_grad():
out = pipe(demo_sample)
```
## Bias, Risks, and Limitations
This model is a fine-tuned version of [Babelscape/wikineural-multilingual-ner](https://huggingface.co/Babelscape/wikineural-multilingual-ner), trained on the Russian-language NER dataset [RCC-MSU/collection3](https://huggingface.co/datasets/RCC-MSU/collection3). It may perform poorly on texts in other languages.
## Citation
```
@inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "Fine-tuned multilingual model for russian language NER.",
author = "nesemenpolkov",
booktitle = "Detecting names in noisy and dirty data.",
month = oct,
year = "2024",
address = "Moscow, Russian Federation",
}
```
|
tanbinh2210/vietnamese-bi-encoder-synthetic
|
tanbinh2210
| 2024-10-14T12:48:01Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:198037",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:bkai-foundation-models/vietnamese-bi-encoder",
"base_model:finetune:bkai-foundation-models/vietnamese-bi-encoder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-14T12:47:42Z |
---
base_model: bkai-foundation-models/vietnamese-bi-encoder
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:198037
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Lệ_phí thông_báo trên phương_tiện_thông_tin_đại_chúng do ai chịu
và quy_định pháp_luật quy_định về việc này như_thế_nào ?
sentences:
- động khai_thác nhưng không đạt được hiệu_quả đầu_tư tối_thiểu khi áp_dụng các
điều_kiện kinh_tế , kỹ_thuật của hợp_đồng dầu_khí được hưởng các chính_sách ưu_đãi
đầu_tư ; g ) Lô dầu_khí có đối_tượng là dầu_khí phi truyền_thống . 3 . Thủ_tướng
Chính_phủ ban_hành danh_mục các lô , mỏ dầu_khí được hưởng chính_sách ưu_đãi đầu_tư
và ưu_đãi đầu_tư đặc_biệt . 4 . Chính_phủ quy_định chi_tiết Điều này . Điều 54
. Chính_sách ưu_đãi 1 . Chính_sách ưu_đãi áp_dụng đối_với các lô , mỏ dầu_khí
được thực_hiện thông_qua hợp_đồng dầu_khí . 2 . Hợp_đồng dầu_khí đối_với lô ,
mỏ dầu_khí được hưởng chính_sách ưu_đãi đầu_tư được áp_dụng mức thuế_suất thuế_thu_nhập
doanh_nghiệp là 32 % , thuế_suất thuế xuất_khẩu dầu_thô là 10 % và mức thu_hồi
chi_phí tối_đa 70 % sản_lượng dầu_khí khai_thác được trong năm . 3 . Hợp_đồng
dầu_khí đối_với lô , mỏ dầu_khí được hưởng chính_sách ưu_đãi đầu_tư đặc_biệt được
áp_dụng mức thuế_suất thuế_thu_nhập doanh_nghiệp là 25 % , thuế_suất thuế xuất_khẩu
dầu_thô là 5 % và mức thu_hồi chi_phí tối_đa 80 % sản_lượng dầu_khí khai_thác
được trong năm .
- cùng của cơ_quan , tổ_chức được cấp , tống_đạt , thông_báo ; b ) Niêm_yết bản_sao
tại nơi cư_trú hoặc nơi cư_trú cuối_cùng của cá_nhân , nơi có trụ_sở hoặc trụ_sở
cuối_cùng của cơ_quan , tổ_chức được cấp , tống_đạt , thông_báo ; c ) Lập biên_bản
về việc thực_hiện thủ_tục niêm_yết công_khai , trong đó ghi rõ ngày , tháng ,
năm niêm_yết . 3 . Thời_hạn niêm_yết công_khai văn_bản tố_tụng là 15 ngày , kể
từ ngày niêm_yết . Điều 180 . Thủ_tục thông_báo trên phương_tiện_thông_tin_đại_chúng
1 . Việc thông_báo trên phương_tiện_thông_tin_đại_chúng được thực_hiện khi pháp_luật
có quy_định hoặc có căn_cứ xác_định là việc niêm_yết công_khai không bảo_đảm cho
người được cấp , tống_đạt , thông_báo nhận được thông_tin về văn_bản cần được
cấp , tống_đạt , thông_báo . 2 . Việc thông_báo trên phương_tiện_thông_tin_đại_chúng
có_thể được thực_hiện nếu có yêu_cầu của các đương_sự khác . Trong trường_hợp
này , lệ_phí thông_báo trên phương_tiện_thông_tin_đại_chúng do đương_sự có yêu_cầu
thông_báo chịu .
- Trung_tâm Lưu_ký Chứng_khoán , thành_viên lưu_ký và Ngân_hàng chỉ_định thanh_toán
được thu các loại phí cung_cấp dịch_vụ theo quy_định của pháp_luật . Chương VI_Công_bố
thông_tin Điều 51 . Đối_tượng , nội_dung , phương_tiện công_bố thông_tin 1 . Trung_tâm
Giao_dịch Chứng_khoán , Sở Giao_dịch Chứng_khoán , tổ_chức phát_hành , tổ_chức
niêm_yết , công_ty chứng_khoán , công_ty quản_lý quỹ phải thực_hiện chế_độ công_bố
thông_tin kịp_thời , chính_xác và theo đúng quy_định của pháp_luật . 2 . Tổ_chức
phát_hành , tổ_chức niêm_yết có trách_nhiệm công_bố rõ_ràng các thông_tin về quyền
biểu_quyết , quyền đăng_ký mua chứng_khoán , quyền chuyển_đổi chứng_khoán và các
quyền khác cho tất_cả các cổ_đông biết ; quy_định và thực_hiện các quy_tắc công_bố
công_khai về sở_hữu cổ_phần của các thành_viên Hội_đồng_quản_trị , Ban Giám_đốc
, cổ_đông lớn và những người có liên_quan .
- source_sentence: Xã_hội có trách_nhiệm gì trong việc hỗ_trợ và phối_hợp với nhà_trường
để giáo_dục học_sinh , trẻ mầm_non ?
sentences:
- an_ninh nhân_dân , thế_trận an_ninh nhân_dân . 4 . Kết_hợp quốc_phòng với kinh_tế
- xã_hội và kinh_tế - xã_hội với quốc_phòng . 5 . Kết_hợp quốc_phòng với an_ninh
, đối_ngoại . Điều 4 . Chính_sách của Nhà_nước về quốc_phòng 1 . Củng_cố , tăng_cường
nền quốc_phòng toàn dân , sức_mạnh quân_sự để xây_dựng , bảo_vệ vững_chắc_Tổ_quốc_Việt_Nam
xã_hội_chủ_nghĩa , góp_phần bảo_vệ hòa_bình ở khu_vực và trên thế_giới . 2 . Thực_hiện
độc_lập , chủ_quyền , thống_nhất , toàn_vẹn lãnh_thổ , bao_gồm đất_liền , hải_đảo
, vùng_biển và vùng_trời ; thực_hiện chính_sách hòa_bình , tự_vệ ; sử_dụng các
biện_pháp chính_đáng , thích_hợp để phòng_ngừa , ngăn_chặn , đẩy_lùi , đánh_bại
mọi âm_mưu và hành_vi xâm_lược .
- cam_kết giữa nhà_trường , gia_đình và các đoàn_thể , tổ_chức xã_hội ngoài cơ_sở
giáo_dục thúc_đẩy và đảm_bảo tính dân_chủ , bình_đẳng trong việc tiếp_cận giáo_dục
đối_với mọi trẻ không phân_biệt giới_tính , vùng_miền và sắc_tộc . 2 . Nhà_trường
chủ_động tuyên_truyền , vận_động cha_mẹ hoặc người giám_hộ trẻ đưa trẻ đến lớp
chuyên cần , đúng giờ ; thông_báo tới gia_đình trẻ về kế_hoạch và hoạt_động giáo_dục
trong thời_gian chuẩn_bị cho trẻ vào lớp Một ; thường_xuyên trao_đổi với cha_mẹ
hoặc người giám_hộ của trẻ về tình_hình rèn_luyện , học_tập và thống_nhất biện_pháp
hỗ_trợ trẻ ; tạo điều_kiện để cha_mẹ hoặc người giám_hộ trẻ đến lớp tìm_hiểu và
hỗ_trợ trẻ học_tập ; huy_động , tạo điều_kiện để gia_đình trẻ tham_gia xây_dựng
nhà_trường theo đúng quy_định của pháp_luật và sử_dụng hiệu_quả các nguồn_lực
đóng_góp . 3 . Nhà_trường tuyên_truyền phổ_biến đối_với các đoàn_thể , tổ_chức
xã_hội trên địa_bàn về định_hướng , kế_hoạch của nhà_trường trong việc dạy và
học tiếng Việt cho trẻ trước khi vào lớp Một , những hoạt_động cần sự hợp_tác
, hỗ_trợ từ các đoàn_thể , tổ_chức xã_hội .
- 'của học_sinh 1 . Tiếp_nhận thông_tin về kết_quả học_tập , rèn_luyện của con hoặc
người được giám_hộ . 2 . Tham_gia hoạt_động giáo_dục theo kế_hoạch của nhà_trường
; tham_gia hoạt_động của ban đại_diện cha_mẹ học_sinh trong nhà_trường . 3 . Phối_hợp
với nhà_trường , cơ_quan quản_lý giáo_dục giải_quyết các vấn_đề có liên_quan đến
việc giáo_dục con hoặc người được giám_hộ theo quy_định . Điều 92 . Ban đại_diện
cha_mẹ học_sinh , trẻ mầm_non 1 . Ban đại_diện cha_mẹ học_sinh , trẻ mầm_non được
tổ_chức trong mỗi năm_học ở giáo_dục_phổ_thông và giáo_dục mầm_non , do cha_mẹ
hoặc người giám_hộ học_sinh , trẻ mầm_non từng lớp , từng trường cử ra để phối_hợp
với nhà_trường trong việc chăm_sóc , nuôi_dưỡng , giáo_dục học_sinh , trẻ mầm_non
và hoạt_động theo quy_định của Bộ_trưởng Bộ_Giáo_dục_và_Đào_tạo . 2 . Không tổ_chức
ban đại_diện cha_mẹ học_sinh , trẻ mầm_non liên_trường và ở các cấp hành_chính
. Điều 93 . Trách_nhiệm của xã_hội 1 . Cơ_quan , tổ_chức , cá_nhân có trách_nhiệm
sau đây :'
- source_sentence: Luật quy_định trường_hợp nào cổ_đông sáng_lập được chuyển_nhượng
cổ_phần phổ_thông mà không cần sự chấp_thuận của Đại_hội_đồng cổ_đông ?
sentences:
- 'tự quản_lý kinh_tế liên_quan đến tài_chính , kế_toán mà chưa được xóa án_tích
; người đang bị áp_dụng biện_pháp xử_lý hành_chính giáo_dục tại xã , phường ,
thị_trấn , đưa vào cơ_sở giáo_dục bắt_buộc , cơ_sở cai_nghiện bắt_buộc ; c ) Người
đã bị kết_án về tội_phạm nghiêm_trọng xâm_phạm trật_tự quản_lý kinh_tế mà chưa
được xóa án_tích ; d ) Người bị xử_phạt vi_phạm hành_chính về hành_vi vi_phạm
pháp_luật về tài_chính , kế_toán , kiểm_toán mà chưa hết thời_hạn 06 tháng , kể
từ ngày chấp_hành xong quyết_định xử_phạt trong trường_hợp bị phạt cảnh_cáo hoặc
chưa hết thời_hạn 01 năm , kể từ ngày chấp_hành xong quyết_định xử_phạt hành_chính
khác ; đ ) Người bị đình_chỉ hành_nghề dịch_vụ kế_toán . Điều 59 . Doanh_nghiệp
kinh_doanh dịch_vụ kế_toán 1 . Doanh_nghiệp kinh_doanh dịch_vụ kế_toán được thành_lập
theo các loại_hình sau đây : a ) Công_ty trách_nhiệm hữu_hạn hai thành_viên trở
lên ; b ) Công_ty hợp_danh ; c ) Doanh_nghiệp tư_nhân .'
- thiết phải có văn_bản chuyển_nhượng ; hoặc tùy từng thời_điểm Hội_đồng_quản_trị
có_thể ban_hành các quy_định khác thay_thế cho các quy_định tương_ứng trong Điều_lệ
này về chứng_chỉ và chuyển_nhượng cổ_phần . Điều 7 . Chuyển_nhượng cổ_phần . 1
. Tất_cả các cổ_phần đều có_thể được tự_do chuyển_nhượng trừ khi Điều_lệ này và
luật_pháp có quy_định khác . Tất_cả các cổ_phiếu niêm_yết trên Trung_tâm Giao_dịch
chứng_khoán sẽ được chuyển_nhượng theo các quy_định của Ủy_ban Chứng_khoán Nhà_nước
và Trung_tâm Giao_dịch chứng_khoán . 2 . [ Trong vòng 3 năm kể từ ngày thành_lập
, các cổ_đông sáng_lập phải cùng nhau nắm giữ ít_nhất 20 % tổng_số cổ_phần phổ_thông
có_thể chuyển_nhượng và trường_hợp chuyển_nhượng số cổ_phần này cho những người
không phải là thành_viên sáng_lập thì cần phải có sự đồng_ý của Đại_hội_đồng cổ_đông
trong Công_ty .
- 'ký của người đại_diện theo pháp_luật hoặc các cổ_đông phổ_thông của công_ty đó
. 2 . Các cổ_đông sáng_lập phải cùng nhau đăng_ký mua ít_nhất 20 % tổng_số cổ_phần
phổ_thông được quyền chào_bán khi đăng_ký thành_lập doanh_nghiệp . 3 . Trong thời_hạn
03 năm kể từ ngày công_ty được cấp Giấy chứng_nhận đăng_ký doanh_nghiệp , cổ_phần
phổ_thông của cổ_đông sáng_lập được tự_do chuyển_nhượng cho cổ_đông sáng_lập khác
và chỉ được chuyển_nhượng cho người không phải là cổ_đông sáng_lập nếu được sự
chấp_thuận của Đại_hội_đồng cổ_đông . Trường_hợp này , cổ_đông sáng_lập dự_định
chuyển_nhượng cổ_phần phổ_thông thì không có quyền biểu_quyết về việc chuyển_nhượng
cổ_phần đó . 4 . Các hạn_chế quy_định tại khoản 3 Điều này không áp_dụng đối_với
cổ_phần phổ_thông sau đây : a ) Cổ_phần mà cổ_đông sáng_lập có thêm sau khi đăng_ký
thành_lập doanh_nghiệp ; b ) Cổ_phần đã được chuyển_nhượng cho người khác không
phải là cổ_đông sáng_lập . Điều 121 . Cổ_phiếu'
- source_sentence: Người giám_hộ có quyền đại_diện theo pháp_luật cho người được giám_hộ
khi nào ?
sentences:
- 'thực_hiện Chương_trình tổng_thể về thực_hành tiết_kiệm , chống lãng_phí hàng
năm và dài_hạn ; xác_định , định_hướng mục_tiêu , chỉ_tiêu tiết_kiệm và yêu_cầu
chống lãng_phí chung trong cả nước gắn với kế_hoạch phát_triển kinh_tế - xã_hội
. 3 . Chỉ_đạo các bộ , cơ_quan ngang bộ , cơ_quan thuộc Chính_phủ , Ủy_ban_nhân_dân
tỉnh , thành_phố trực_thuộc trung_ương : phối_hợp trong việc thực_hiện nhiệm_vụ
để bảo_đảm thực_hành tiết_kiệm , chống lãng_phí ; xây_dựng và thực_hiện Chương_trình
thực_hành tiết_kiệm , chống lãng_phí hàng năm và dài_hạn thuộc phạm_vi , lĩnh_vực
quản_lý , quyết_định chỉ_tiêu tiết_kiệm cụ_thể của ngành , lĩnh_vực trọng_yếu
trong Chương_trình thực_hành tiết_kiệm , chống lãng_phí của các bộ , ngành và
địa_phương . 4 . Quy_định chế_độ báo_cáo việc thực_hành tiết_kiệm , chống lãng_phí
; tổng_hợp báo_cáo Quốc_hội kết_quả thực_hành tiết_kiệm , chống lãng_phí hàng
năm tại kỳ họp đầu năm sau . 5 . Tổ_chức công_tác thanh_tra , kiểm_tra việc thực_hành
tiết_kiệm , chống lãng_phí ; xử_lý kịp_thời , nghiêm_minh và công_khai việc xử_lý
đối_với hành_vi vi_phạm pháp_luật về thực_hành tiết_kiệm , chống lãng_phí theo
quy_định của pháp_luật .'
- 'của cá_nhân 1 . Cha , mẹ đối_với con chưa thành_niên . 2 . Người giám_hộ đối_với
người được giám_hộ . Người giám_hộ của người có khó_khăn trong nhận_thức , làm
chủ hành_vi là người đại_diện theo pháp_luật nếu được Tòa_án chỉ_định . 3 . Người
do Tòa_án chỉ_định trong trường_hợp không xác_định được người đại_diện quy_định
tại khoản 1 và khoản 2 Điều này . 4 . Người do Tòa_án chỉ_định đối_với người bị
hạn_chế năng_lực hành_vi dân_sự . Điều 137 . Đại_diện theo pháp_luật của pháp_nhân
1 . Người đại_diện theo pháp_luật của pháp_nhân bao_gồm : a ) Người được pháp_nhân
chỉ_định theo điều_lệ ; b ) Người có thẩm_quyền đại_diện theo quy_định của pháp_luật
; c ) Người do Tòa_án chỉ_định trong quá_trình tố_tụng tại Tòa_án . 2 . Một pháp_nhân
có_thể có nhiều người đại_diện theo pháp_luật và mỗi người đại_diện có quyền đại_diện
cho pháp_nhân theo quy_định tại Điều 140 và Điều 141 của Bộ_luật này . Điều 138
. Đại_diện theo ủy quyền 1 . Cá_nhân , pháp_nhân có_thể ủy quyền cho cá_nhân ,
pháp_nhân khác xác_lập , thực_hiện giao_dịch dân_sự .'
- 'chủ hành_vi 1 . Người giám_hộ của người mất năng_lực hành_vi dân_sự có các nghĩa_vụ
sau đây : a ) Chăm_sóc , bảo_đảm việc điều_trị bệnh cho người được giám_hộ ; b
) Đại_diện cho người được giám_hộ trong các giao_dịch dân_sự ; c ) Quản_lý tài_sản
của người được giám_hộ ; d ) Bảo_vệ quyền , lợi_ích hợp_pháp của người được giám_hộ
. 2 . Người giám_hộ của người có khó_khăn trong nhận_thức , làm chủ hành_vi có
nghĩa_vụ theo quyết_định của Tòa_án trong số các nghĩa_vụ quy_định tại khoản 1
Điều này . Điều 58 . Quyền của người giám_hộ 1 . Người giám_hộ của người chưa
thành_niên , người mất năng_lực hành_vi dân_sự có các quyền sau đây : a ) Sử_dụng
tài_sản của người được giám_hộ để chăm_sóc , chi_dùng cho những nhu_cầu thiết_yếu
của người được giám_hộ ; b ) Được thanh_toán các chi_phí hợp_lý cho việc quản_lý
tài_sản của người được giám_hộ ; c ) Đại_diện cho người được giám_hộ trong việc
xác_lập , thực_hiện giao_dịch dân_sự và thực_hiện các quyền khác theo quy_định
của pháp_luật nhằm bảo_vệ quyền , lợi_ích hợp_pháp của người được giám_hộ .'
- source_sentence: Khi tham_gia tố_tụng , đương_sự có nghĩa_vụ cung_cấp chứng_cứ như_thế_nào
để bảo_vệ quyền_lợi của mình ?
sentences:
- vi_phạm pháp_luật của cơ_quan , người có thẩm_quyền trong việc thi_hành án hình_sự
; e ) Kiến_nghị , yêu_cầu cơ_quan , tổ_chức , cá_nhân chấm_dứt , khắc_phục vi_phạm
pháp_luật trong thi_hành án hình_sự ; xử_lý nghiêm_minh người vi_phạm ; g ) Khởi_tố
hoặc yêu_cầu Cơ_quan điều_tra khởi_tố vụ án hình_sự khi phát_hiện vụ_việc có dấu_hiệu
tội_phạm trong thi_hành án hình_sự theo quy_định của pháp_luật ; h ) Thực_hiện
nhiệm_vụ , quyền_hạn khác trong kiểm_sát thi_hành án hình_sự theo quy_định của
pháp_luật về thi_hành án hình_sự . Điều 26 . Trách_nhiệm thực_hiện yêu_cầu , kiến_nghị
, kháng_nghị , quyết_định của Viện_kiểm_sát nhân_dân trong việc thi_hành án hình_sự
1 . Đối_với yêu_cầu ra quyết_định thi_hành án hình_sự đúng quy_định của pháp_luật
, yêu_cầu cung_cấp hồ_sơ , tài_liệu có liên_quan đến việc thi_hành án hình_sự
, cơ_quan , tổ_chức , cá_nhân được yêu_cầu phải thực_hiện ngay .
- 'giải_quyết khiếu_nại về quyết_định xử_lý vụ_việc cạnh_tranh , bản_sao quyết_định
giải_quyết khiếu_nại ( nếu có ) , cung_cấp chứng_cứ khác để bảo_vệ quyền , lợi_ích
hợp_pháp của mình ; trường_hợp không cung_cấp được thì phải nêu rõ lý_do . 2 .
Người bị kiện có nghĩa_vụ cung_cấp cho Tòa_án hồ_sơ giải_quyết khiếu_nại ( nếu
có ) và bản_sao các văn_bản , tài_liệu mà căn_cứ vào đó để ra quyết_định hành_chính
, quyết_định kỷ_luật buộc thôi_việc , quyết_định giải_quyết khiếu_nại về quyết_định
xử_lý vụ_việc cạnh_tranh hoặc có hành_vi hành_chính . 3 . Người có quyền_lợi ,
nghĩa_vụ liên_quan có nghĩa_vụ cung_cấp chứng_cứ để bảo_vệ quyền , lợi_ích hợp_pháp
của mình . Điều 79 . Những tình_tiết , sự_kiện không phải chứng_minh 1 . Những
tình_tiết , sự_kiện sau đây không phải chứng_minh : a ) Những tình_tiết , sự_kiện
rõ_ràng mà mọi người đều biết và được Tòa_án thừa_nhận ; b ) Những tình_tiết ,
sự_kiện đã được xác_định trong bản_án , quyết_định của Tòa_án đã có hiệu_lực pháp_luật
;'
- 'ra thì bên đương_sự đó không phải chứng_minh . 3 . Đương_sự có người đại_diện
tham_gia tố_tụng thì sự thừa_nhận của người đại_diện được coi là sự thừa_nhận
của đương_sự nếu không vượt quá phạm_vi đại_diện . Điều 93 . Chứng_cứ Chứng_cứ
trong vụ_việc dân_sự là những gì có thật được đương_sự và cơ_quan , tổ_chức ,
cá_nhân khác giao_nộp , xuất_trình cho Tòa_án trong quá_trình tố_tụng hoặc do
Tòa_án thu_thập được theo trình_tự , thủ_tục do Bộ_luật này quy_định và được Tòa_án
sử_dụng làm căn_cứ để xác_định các tình_tiết khách_quan của vụ án cũng như xác_định
yêu_cầu hay sự phản_đối của đương_sự là có căn_cứ và hợp_pháp . Điều 94 . Nguồn
chứng_cứ Chứng_cứ được thu_thập từ các nguồn sau đây : 1 . Tài_liệu đọc được ,
nghe được , nhìn được , dữ_liệu điện_tử ; 2 . Vật_chứng ; 3 . Lời khai của đương_sự
; 4 . Lời khai của người làm_chứng ; 5 . Kết_luận giám_định ; 6 . Biên_bản ghi
kết_quả thẩm_định tại_chỗ ; 7 . Kết_quả định_giá tài_sản , thẩm_định giá tài_sản
; 8 . Văn_bản ghi_nhận sự_kiện , hành_vi pháp_lý_do người có chức_năng lập ; 9
. Văn_bản công_chứng , chứng_thực ;'
---
# SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder) <!-- at revision 84f9d9ada0d1a3c37557398b9ae9fcedcdf40be0 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tanbinh2210/vietnamese-bi-encoder-synthetic")
# Run inference
sentences = [
'Khi tham_gia tố_tụng , đương_sự có nghĩa_vụ cung_cấp chứng_cứ như_thế_nào để bảo_vệ quyền_lợi của mình ?',
'ra thì bên đương_sự đó không phải chứng_minh . 3 . Đương_sự có người đại_diện tham_gia tố_tụng thì sự thừa_nhận của người đại_diện được coi là sự thừa_nhận của đương_sự nếu không vượt quá phạm_vi đại_diện . Điều 93 . Chứng_cứ Chứng_cứ trong vụ_việc dân_sự là những gì có thật được đương_sự và cơ_quan , tổ_chức , cá_nhân khác giao_nộp , xuất_trình cho Tòa_án trong quá_trình tố_tụng hoặc do Tòa_án thu_thập được theo trình_tự , thủ_tục do Bộ_luật này quy_định và được Tòa_án sử_dụng làm căn_cứ để xác_định các tình_tiết khách_quan của vụ án cũng như xác_định yêu_cầu hay sự phản_đối của đương_sự là có căn_cứ và hợp_pháp . Điều 94 . Nguồn chứng_cứ Chứng_cứ được thu_thập từ các nguồn sau đây : 1 . Tài_liệu đọc được , nghe được , nhìn được , dữ_liệu điện_tử ; 2 . Vật_chứng ; 3 . Lời khai của đương_sự ; 4 . Lời khai của người làm_chứng ; 5 . Kết_luận giám_định ; 6 . Biên_bản ghi kết_quả thẩm_định tại_chỗ ; 7 . Kết_quả định_giá tài_sản , thẩm_định giá tài_sản ; 8 . Văn_bản ghi_nhận sự_kiện , hành_vi pháp_lý_do người có chức_năng lập ; 9 . Văn_bản công_chứng , chứng_thực ;',
'giải_quyết khiếu_nại về quyết_định xử_lý vụ_việc cạnh_tranh , bản_sao quyết_định giải_quyết khiếu_nại ( nếu có ) , cung_cấp chứng_cứ khác để bảo_vệ quyền , lợi_ích hợp_pháp của mình ; trường_hợp không cung_cấp được thì phải nêu rõ lý_do . 2 . Người bị kiện có nghĩa_vụ cung_cấp cho Tòa_án hồ_sơ giải_quyết khiếu_nại ( nếu có ) và bản_sao các văn_bản , tài_liệu mà căn_cứ vào đó để ra quyết_định hành_chính , quyết_định kỷ_luật buộc thôi_việc , quyết_định giải_quyết khiếu_nại về quyết_định xử_lý vụ_việc cạnh_tranh hoặc có hành_vi hành_chính . 3 . Người có quyền_lợi , nghĩa_vụ liên_quan có nghĩa_vụ cung_cấp chứng_cứ để bảo_vệ quyền , lợi_ích hợp_pháp của mình . Điều 79 . Những tình_tiết , sự_kiện không phải chứng_minh 1 . Những tình_tiết , sự_kiện sau đây không phải chứng_minh : a ) Những tình_tiết , sự_kiện rõ_ràng mà mọi người đều biết và được Tòa_án thừa_nhận ; b ) Những tình_tiết , sự_kiện đã được xác_định trong bản_án , quyết_định của Tòa_án đã có hiệu_lực pháp_luật ;',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 198,037 training samples
* Columns: <code>query</code>, <code>pos</code>, and <code>neg</code>
* Approximate statistics based on the first 1000 samples:
| | query | pos | neg |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.59 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 179.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 178.89 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| query | pos | neg |
|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Quy_định pháp_luật về lập kế_hoạch sử_dụng đất có những nguyên_tắc nào ?</code> | <code>d ) Khai_thác hợp_lý tài_nguyên thiên_nhiên ; thích_ứng với biến_đổi khí_hậu ; đ ) Nội_dung phân_bổ và sử_dụng đất trong quy_hoạch ngành quốc_gia , quy_hoạch_vùng , quy_hoạch tỉnh phải bảo_đảm phù_hợp với quy_hoạch sử_dụng đất quốc_gia . 2 . Việc lập kế_hoạch sử_dụng đất phải tuân_thủ các nguyên_tắc sau đây : a ) Phù_hợp với chiến_lược , kế_hoạch phát_triển kinh_tế-xã hội , quốc_phòng , an_ninh ; b ) Kế_hoạch sử_dụng đất phải phù_hợp với quy_hoạch sử_dụng đất cùng cấp đã được cơ_quan nhà_nước có thẩm_quyền phê_duyệt ; đối_với kế_hoạch sử_dụng đất cấp tỉnh phải phù_hợp với phương_án phân_bổ , khoanh vùng_đất_đai trong quy_hoạch tỉnh ; c ) Sử_dụng đất tiết_kiệm và có hiệu_quả ; d ) Khai_thác hợp_lý tài_nguyên thiên_nhiên và bảo_vệ môi_trường ; thích_ứng với biến_đổi khí_hậu ; đ ) Bảo_vệ , tôn_tạo di_tích lịch_sử - văn_hóa , danh_lam_thắng_cảnh ; e ) Kế_hoạch của ngành , lĩnh_vực , địa_phương có sử_dụng đất phải bảo_đảm phù_hợp với quy_hoạch , kế_hoạch sử_dụng đất đã được cơ_quan nhà_nước có thẩm_quyền quyết_định , phê_duyệt . Điều 36 . Hệ_thống quy_hoạch , kế_hoạch sử_dụng đất</code> | <code>phê_duyệt và điều_chỉnh quy_hoạch có tính_chất kỹ_thuật , chuyên_ngành để triển_khai các nội_dung quy_định tại khoản 2 Điều này được thực_hiện theo quy_định của pháp_luật có liên_quan . Điều 24 . Nội_dung quy_hoạch sử_dụng đất quốc_gia 1 . Nội_dung quy_hoạch sử_dụng đất quốc_gia xác_định việc phân_bổ và tổ_chức không_gian sử_dụng đất cho các mục_tiêu phát_triển kinh_tế - xã_hội , quốc_phòng , an_ninh , bảo_vệ môi_trường , phòng , chống thiên_tai và ứng_phó biến_đổi khí_hậu trên cơ_sở tiềm_năng đất_đai và nhu_cầu sử_dụng đất của các ngành , lĩnh_vực mang tính liên_vùng , liên tỉnh . 2 . Quy_hoạch sử_dụng đất quốc_gia bao_gồm những nội_dung chủ_yếu sau đây : a ) Phân_tích , đánh_giá về các yếu_tố , điều_kiện tự_nhiên , nguồn_lực , bối_cảnh trực_tiếp tác_động và thực_trạng sử_dụng đất của các ngành , lĩnh_vực ; b ) Dự_báo xu_thế biến_động của việc sử_dụng đất ; c ) Xác_định các quan_điểm và mục_tiêu sử_dụng đất trong thời_kỳ mới ; d ) Định_hướng phân_bổ không_gian và chỉ_tiêu sử_dụng đất nông_nghiệp , đất rừng ; đ ) Định_hướng phân_bổ không_gian và chỉ_tiêu sử_dụng đất phi_nông_nghiệp ;</code> |
| <code>Giấy chứng_nhận hành_nghề xử_lý vật_thể thuộc diện kiểm_dịch thực_vật có giá_trị trong bao_lâu ?</code> | <code>Giấy chứng_nhận sức_khỏe của người_quản_lý , điều_hành và những người trực_tiếp thực_hiện xử_lý vật_thể thuộc diện kiểm_dịch thực_vật ; đ ) Tài_liệu về quy_trình kỹ_thuật ; danh_mục phương_tiện , trang_thiết_bị hành_nghề tại thời_điểm đề_nghị ; e ) Giấy_tờ chứng_minh đủ điều_kiện phòng cháy và chữa_cháy , bảo_vệ môi_trường theo quy_định của pháp_luật . 3 . Trình_tự , thủ_tục cấp lại Giấy chứng_nhận hành_nghề xử_lý vật_thể thuộc diện kiểm_dịch thực_vật thực_hiện theo quy_định tại khoản 2 Điều 38 của Luật này . Điều 40 . Hiệu_lực của Giấy chứng_nhận hành_nghề xử_lý vật_thể thuộc diện kiểm_dịch thực_vật 1 . Giấy chứng_nhận hành_nghề xử_lý vật_thể thuộc diện kiểm_dịch thực_vật có giá_trị trong thời_hạn 05 năm . 2 . Trước 03 tháng tính đến ngày Giấy chứng_nhận hành_nghề xử_lý vật_thể thuộc diện kiểm_dịch thực_vật hết hạn , tổ_chức hành_nghề phải nộp hồ_sơ cho cơ_quan có thẩm_quyền cấp giấy chứng_nhận trong trường_hợp có nhu_cầu tiếp_tục hành_nghề . Điều 41 . Thu_hồi Giấy chứng_nhận hành_nghề xử_lý vật_thể thuộc diện kiểm_dịch thực_vật</code> | <code>định tại Điều 6 Thông_tư này cho Cục Bảo_vệ thực_vật trong trường_hợp có nhu_cầu tiếp_tục hành_nghề xử_lý vật_thể . Điều 13 . Điều_khoản thi_hành 1 . Thông_tư này có hiệu_lực thi_hành kể từ ngày 30 tháng 03 năm 2015 . 2 . Thay_thế Quyết_định số 89/2007 / QĐ-BNN ngày 01 tháng 11 năm 2007 của Bộ_trưởng Bộ Nông_nghiệp và_Phát_triển_nông_thôn ban_hành Quy_định quản_lý_nhà_nước về hoạt_động xông_hơi khử_trùng vật_thể thuộc diện kiểm_dịch thực_vật ; bãi_bỏ Điều 1 của Thông_tư số 85/2011 / TT-BNNPTNT ngày 14 tháng 12 năm 2011 của Bộ_trưởng Bộ Nông_nghiệp và_Phát_triển_nông_thôn về việc Sửa_đổi , bổ_sung một_số điều của Quyết_định số 89/2007 / QĐ-BNN ngày 01 tháng 11 năm 2007 Quy_định quản_lý_nhà_nước về hoạt_động xông_hơi khử_trùng vật_thể thuộc diện kiểm_dịch thực_vật và Quyết_định số 97/2008 / QĐ-BNN ngày 6/10/2008 Quy_định về việc cấp chứng_chỉ hành_nghề sản_xuất , gia_công , sang chai , đóng_gói , buôn_bán thuốc bảo_vệ thực_vật của Bộ_Nông_nghiệp và_Phát_triển_nông_thôn .</code> |
| <code>Thanh_niên xung_phong được hưởng chế_độ , chính_sách gì khi tham_gia và sau khi hoàn_thành nhiệm_vụ ?</code> | <code>quyết việc_làm , giáo_dục , đào_tạo , rèn_luyện thanh_niên và các nhiệm_vụ đột_xuất , cấp_bách , khó_khăn , gian_khổ trong xây_dựng và bảo_vệ Tổ_quốc . 2 . Nhà_nước ban_hành chính_sách để thanh_niên xung_phong thực_hiện các nhiệm_vụ sau đây : a ) Tham_gia dự_án phát_triển kinh_tế - xã_hội được Nhà_nước giao ; b ) Tham_gia khắc_phục hậu_quả thiên_tai , dịch_bệnh ; bảo_vệ môi_trường ; giữ_gìn trật_tự , an_toàn xã_hội , quốc_phòng , an_ninh quốc_gia ; c ) Tham_gia_sản_xuất hàng hóa , cung_ứng dịch_vụ , chuyển_giao tiến_bộ kỹ_thuật hỗ_trợ sản_xuất , đào_tạo nghề gắn với tạo việc_làm cho thanh_niên ; d ) Các nhiệm_vụ đột_xuất , cấp_bách , khó_khăn , gian_khổ theo quy_định của pháp_luật . 3 . Nhà_nước bảo_đảm điều_kiện về kinh_phí , cơ_sở_vật_chất và trang_thiết_bị cần_thiết cho tổ_chức thanh_niên xung_phong khi thực_hiện nhiệm_vụ được Nhà_nước giao . 4 . Thanh_niên xung_phong được hưởng chế_độ , chính_sách trong và sau khi hoàn_thành nhiệm_vụ . 5 . Chính_phủ quy_định chi_tiết Điều này . Điều 23 . Chính_sách đối_với thanh_niên tình_nguyện</code> | <code>sách_xã_hội cho đội_viên thanh_niên xung_phong thuộc tổng_đội để sản_xuất , cung_ứng dịch_vụ theo quy_định của pháp_luật . 2 . Trung_tâm , Trường Giáo_dục lao_động xã_hội của tổ_chức thanh_niên xung_phong được hưởng các chính_sách sau đây : a ) Được_hưởng chính_sách ưu_đãi đối_với hoạt_động cai_nghiện ma_túy và giải_quyết việc_làm cho người sau cai_nghiện ; hoạt_động dạy nghề và dịch_vụ việc_làm theo quy_định của pháp_luật ; b ) Nhà_nước đảm_bảo cho các hoạt_động thường_xuyên của bộ_máy quản_lý quy_định tại khoản 2 Điều 13 Nghị_định này theo quy_định của pháp_luật . 3 . Doanh_nghiệp thuộc tổ_chức thanh_niên xung_phong sản_xuất , cung_ứng các sản_phẩm , dịch_vụ công_ích do Nhà_nước giao được cấp kinh_phí tương_ứng và được hưởng các cơ_chế , chính_sách , ưu_đãi khác theo quy_định của pháp_luật . Doanh_nghiệp thuộc tổ_chức thanh_niên xung_phong tiếp_nhận đội_viên thanh_niên xung_phong sau khi hoàn_thành nhiệm_vụ hoặc thanh_niên sau cai_nghiện ma túy được hưởng các chính_sách quy_định tại điểm b , điểm c khoản 1 Điều 15 Nghị_định này . Điều 16 . Chính_sách đối_với đội_viên thanh_niên xung_phong</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
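In `sentence-transformers`, these parameters correspond to constructing the loss as below (a sketch; `cos_sim` is the default similarity function, and the base model is taken from the card header):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("bkai-foundation-models/vietnamese-bi-encoder")
# scale=20.0 with cosine similarity, matching the parameters above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```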
### Evaluation Dataset
#### json
* Dataset: json
* Size: 198,037 evaluation samples
* Columns: <code>query</code>, <code>pos</code>, and <code>neg</code>
* Approximate statistics based on the first 1000 samples:
| | query | pos | neg |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.87 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 179.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 179.49 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| query | pos | neg |
|:---------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Bộ Y_tế quy_định thế_nào về bảo_vệ , chăm_sóc sức khỏe sinh_sản , sức khỏe tình_dục cho thanh_niên ?</code> | <code>thức , kỹ_năng khởi_nghiệp cho thanh_niên . 2 . Cung_cấp thông_tin về thị_trường ; hỗ_trợ pháp_lý , khoa_học và công_nghệ , xúc_tiến đầu_tư , phát_triển nguồn nhân_lực ; ưu_đãi vay vốn từ các tổ_chức tín_dụng theo quy_định của pháp_luật . 3 . Khuyến_khích , tạo môi_trường thuận_lợi để thanh_niên khởi_nghiệp sáng_tạo ứng_dụng khoa_học và công_nghệ . 4 . Ưu_đãi , hỗ_trợ tổ_chức , cá_nhân tham_gia cung_ứng dịch_vụ hỗ_trợ thanh_niên khởi_nghiệp ; khuyến_khích thành_lập quỹ khởi_nghiệp cho thanh_niên theo quy_định của pháp_luật Điều 19 . Chính_sách về bảo_vệ , chăm_sóc và nâng cao sức khỏe 1 . Tư_vấn , hỗ_trợ nâng cao sức_khỏe cho thanh_niên ; phòng , chống bạo_lực gia_đình , bạo_lực học_đường , xâm_hại tình_dục ; phòng , chống ma_túy , HIV / AIDS ; phòng_ngừa bệnh lây_truyền qua đường tình_dục , bệnh_xã_hội khác và các nguy_cơ ảnh_hưởng đến sức khỏe thể_chất , tinh_thần của thanh_niên . 2 . Bảo_đảm cho thanh_niên được cung_cấp thông_tin , tiếp_cận dịch_vụ thân_thiện về bảo_vệ , chăm_sóc sức khỏe sinh_sản , sức khỏe tình_dục ; được tư_vấn , khám sức khỏe trước khi kết_hôn</code> | <code>thao cho thanh_niên ; d ) Hướng_dẫn thực_hiện các quy_định của pháp_luật về chính_sách tạo điều_kiện , hỗ_trợ các cơ_quan , tổ_chức , cá_nhân phát_triển các hoạt_động tư_vấn về tình_yêu , hôn_nhân và hạnh_phúc gia_đình cho thanh_niên . 9 . Bộ Y_tế Ban_hành hoặc trình cấp có thẩm_quyền ban_hành chương_trình , dự_án chăm_sóc sức_khỏe ; cung_cấp dịch_vụ chăm_sóc sức khỏe sinh_sản , sức khỏe tình_dục ; phòng_chống tác_hại thuốc_lá ; phòng_chống tác_hại rượu_bia , phòng_chống các bệnh truyền_nhiễm và các bệnh_xã_hội khác ; thực_hiện kế_hoạch hóa gia_đình cho thanh_niên . 10 . Bộ Quốc_phòng a ) Xây_dựng chương_trình giáo_dục , bồi_dưỡng kiến_thức quốc_phòng đối_với thanh_niên trong tình_hình mới ; b ) Hướng_dẫn thực_hiện chế_độ , chính_sách ưu_tiên đào_tạo nghề và giải_quyết việc_làm cho thanh_niên tham_gia nghĩa_vụ_quân_sự xuất_ngũ theo quy_định và thanh_niên tình_nguyện đã hoàn_thành nhiệm_vụ tham_gia phát_triển kinh_tế - xã_hội trong các khu kinh_tế quốc_phòng . 11 . Bộ Công_an a ) Xây_dựng chương_trình giáo_dục , bồi_dưỡng kiến_thức bảo_vệ an_ninh Tổ_quốc đối_với thanh_niên trong tình_hình mới ;</code> |
| <code>Thời_hạn để niêm_yết , công_bố , gửi bản_án và thông_báo là bao_lâu kể từ ngày bản_án có hiệu_lực pháp_luật ?</code> | <code>chức , cá_nhân khởi_kiện được Tòa_án cấp trích_lục bản_án . 2 . Trong thời_hạn 10 ngày , kể từ ngày tuyên_án , Tòa_án phải giao hoặc gửi bản_án cho các đương_sự , cơ_quan , tổ_chức , cá_nhân khởi_kiện và Viện_kiểm_sát cùng cấp . 3 . Bản_án sơ_thẩm có hiệu_lực pháp_luật của Tòa_án giải_quyết vụ án dân_sự bảo_vệ quyền_lợi người tiêu_dùng do tổ_chức xã_hội tham_gia bảo_vệ người tiêu_dùng khởi_kiện phải được niêm_yết công_khai tại trụ_sở Tòa_án và công_bố công_khai trên một trong các báo hàng ngày của trung_ương hoặc địa_phương trong ba số liên_tiếp . Bản_án sơ_thẩm có hiệu_lực pháp_luật của Tòa_án có liên_quan đến trách_nhiệm bồi_thường của Nhà_nước phải được Tòa_án cấp sơ_thẩm gửi cho cơ_quan quản_lý_nhà_nước có thẩm_quyền về bồi_thường nhà_nước . Bản_án sơ_thẩm có hiệu_lực pháp_luật của Tòa_án có liên_quan đến việc thay_đổi hộ_tịch của cá_nhân phải được Tòa_án cấp sơ_thẩm thông_báo bằng văn_bản kèm theo trích_lục bản_án cho Ủy_ban_nhân_dân nơi đã đăng_ký hộ_tịch của cá_nhân đó theo quy_định của Luật hộ_tịch . Thời_hạn niêm_yết , công_bố , gửi bản_án , thông_báo quy_định tại khoản này là 05 ngày làm_việc , kể từ ngày bản_án có hiệu_lực pháp_luật .</code> | <code>cùng của cơ_quan , tổ_chức được cấp , tống_đạt , thông_báo ; b ) Niêm_yết bản_sao tại nơi cư_trú hoặc nơi cư_trú cuối_cùng của cá_nhân , nơi có trụ_sở hoặc trụ_sở cuối_cùng của cơ_quan , tổ_chức được cấp , tống_đạt , thông_báo ; c ) Lập biên_bản về việc thực_hiện thủ_tục niêm_yết công_khai , trong đó ghi rõ ngày , tháng , năm niêm_yết . 3 . Thời_hạn niêm_yết công_khai văn_bản tố_tụng là 15 ngày , kể từ ngày niêm_yết . Điều 180 . Thủ_tục thông_báo trên phương_tiện_thông_tin_đại_chúng 1 . Việc thông_báo trên phương_tiện_thông_tin_đại_chúng được thực_hiện khi pháp_luật có quy_định hoặc có căn_cứ xác_định là việc niêm_yết công_khai không bảo_đảm cho người được cấp , tống_đạt , thông_báo nhận được thông_tin về văn_bản cần được cấp , tống_đạt , thông_báo . 2 . Việc thông_báo trên phương_tiện_thông_tin_đại_chúng có_thể được thực_hiện nếu có yêu_cầu của các đương_sự khác . Trong trường_hợp này , lệ_phí thông_báo trên phương_tiện_thông_tin_đại_chúng do đương_sự có yêu_cầu thông_báo chịu .</code> |
| <code>Quy_định pháp_luật quy_định như_thế_nào về quyền giám_sát của Thường_trực Hội_đồng_nhân_dân ?</code> | <code>định của Ủy_ban_nhân_dân cùng cấp và nghị_quyết của Hội_đồng_nhân_dân cấp dưới trực_tiếp có dấu_hiệu trái với Hiến_pháp , luật , văn_bản quy_phạm_pháp_luật của cơ_quan nhà_nước cấp trên , nghị_quyết của Hội_đồng_nhân_dân cùng cấp . 2 . Xem_xét việc trả_lời chất_vấn của những người bị chất_vấn quy_định tại điểm đ khoản 1 Điều 5 của Luật này trong thời_gian giữa hai kỳ họp Hội_đồng_nhân_dân . 3 . Giám_sát chuyên_đề . 4 . Tổ_chức hoạt_động giải_trình tại phiên họp Thường_trực Hội_đồng_nhân_dân . 5 . Giám_sát việc giải_quyết khiếu_nại , tố_cáo của công_dân . 6 . Giám_sát việc giải_quyết kiến_nghị của cử_tri . Điều 67 . Chương_trình giám_sát của Thường_trực Hội_đồng_nhân_dân</code> | <code>Tòa_án nhân_dân , Viện_kiểm_sát nhân_dân , cơ_quan thi_hành án dân_sự cùng cấp và Ban của Hội_đồng_nhân_dân cấp mình ; giám_sát quyết_định của Ủy_ban_nhân_dân cùng cấp và nghị_quyết của Hội_đồng_nhân_dân cấp dưới trực_tiếp ; b ) Thường_trực Hội_đồng_nhân_dân giám_sát việc tuân theo Hiến_pháp , pháp_luật ở địa_phương và việc thực_hiện nghị_quyết của Hội_đồng_nhân_dân cùng cấp ; giám_sát hoạt_động của Ủy_ban_nhân_dân , các cơ_quan thuộc Ủy_ban_nhân_dân , Tòa_án nhân_dân , Viện_kiểm_sát nhân_dân , cơ_quan thi_hành án dân_sự cùng cấp và Hội_đồng_nhân_dân cấp dưới ; giám_sát quyết_định của Ủy_ban_nhân_dân cùng cấp , nghị_quyết của Hội_đồng_nhân_dân cấp dưới trực_tiếp ; giúp Hội_đồng_nhân_dân thực_hiện quyền giám_sát theo sự phân_công của Hội_đồng_nhân_dân ; c ) Ban của Hội_đồng_nhân_dân giúp Hội_đồng_nhân_dân giám_sát hoạt_động của Tòa_án nhân_dân , Viện_kiểm_sát nhân_dân , cơ_quan thi_hành án dân_sự cùng cấp ; giám_sát hoạt_động của Ủy_ban_nhân_dân , các cơ_quan thuộc Ủy_ban_nhân_dân cùng cấp thuộc lĩnh_vực Ban phụ_trách ; giám_sát văn_bản quy_phạm_pháp_luật thuộc lĩnh_vực Ban phụ_trách ;</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `overwrite_output_dir`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `learning_rate`: 1e-06
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
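As a hedged sketch, the non-default values above map onto a `SentenceTransformerTrainer` run roughly as follows (the dataset loading and output path are assumptions):
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("bkai-foundation-models/vietnamese-bi-encoder")
# Assumed local file with the query/pos/neg columns described above.
train_dataset = load_dataset("json", data_files="train.json", split="train")

args = SentenceTransformerTrainingArguments(
    output_dir="vietnamese-bi-encoder-synthetic",  # assumed path
    overwrite_output_dir=True,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    learning_rate=1e-6,
    num_train_epochs=4,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.MultipleNegativesRankingLoss(model, scale=20.0),
)
trainer.train()
```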
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 12
- `per_device_eval_batch_size`: 12
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0379 | 500 | 0.6014 |
| 0.0757 | 1000 | 0.5533 |
| 0.1136 | 1500 | 0.5199 |
| 0.1515 | 2000 | 0.4743 |
| 0.1894 | 2500 | 0.4334 |
| 0.2272 | 3000 | 0.4158 |
| 0.2651 | 3500 | 0.401 |
| 0.3030 | 4000 | 0.3592 |
| 0.3408 | 4500 | 0.3334 |
| 0.3787 | 5000 | 0.3045 |
| 0.4166 | 5500 | 0.2926 |
| 0.4544 | 6000 | 0.2879 |
| 0.4923 | 6500 | 0.2603 |
| 0.5302 | 7000 | 0.2616 |
| 0.5681 | 7500 | 0.2466 |
| 0.6059 | 8000 | 0.2486 |
| 0.6438 | 8500 | 0.238 |
| 0.6817 | 9000 | 0.2364 |
| 0.7195 | 9500 | 0.2403 |
| 0.7574 | 10000 | 0.2194 |
| 0.7953 | 10500 | 0.2153 |
| 0.8331 | 11000 | 0.2266 |
| 0.8710 | 11500 | 0.2166 |
| 0.9089 | 12000 | 0.2175 |
| 0.9468 | 12500 | 0.1989 |
| 0.9846 | 13000 | 0.2101 |
| 1.0225 | 13500 | 0.2016 |
| 1.0604 | 14000 | 0.193 |
| 1.0982 | 14500 | 0.1907 |
| 1.1361 | 15000 | 0.1922 |
| 1.1740 | 15500 | 0.1873 |
| 1.2118 | 16000 | 0.1888 |
| 1.2497 | 16500 | 0.1891 |
| 1.2876 | 17000 | 0.1828 |
| 1.3255 | 17500 | 0.1768 |
| 1.3633 | 18000 | 0.1588 |
| 1.4012 | 18500 | 0.1681 |
| 1.4391 | 19000 | 0.1722 |
| 1.4769 | 19500 | 0.1624 |
| 1.5148 | 20000 | 0.1601 |
| 1.5527 | 20500 | 0.1631 |
| 1.5905 | 21000 | 0.1541 |
| 1.6284 | 21500 | 0.1544 |
| 1.6663 | 22000 | 0.1623 |
| 1.7042 | 22500 | 0.1533 |
| 1.7420 | 23000 | 0.1602 |
| 1.7799 | 23500 | 0.1494 |
| 1.8178 | 24000 | 0.16 |
| 1.8556 | 24500 | 0.1547 |
| 1.8935 | 25000 | 0.1601 |
| 1.9314 | 25500 | 0.1485 |
| 1.9692 | 26000 | 0.1488 |
| 2.0071 | 26500 | 0.1488 |
| 2.0450 | 27000 | 0.146 |
| 2.0829 | 27500 | 0.1426 |
| 2.1207 | 28000 | 0.148 |
| 2.1586 | 28500 | 0.1414 |
| 2.1965 | 29000 | 0.1434 |
| 2.2343 | 29500 | 0.145 |
| 2.2722 | 30000 | 0.1509 |
| 2.3101 | 30500 | 0.1347 |
| 2.3480 | 31000 | 0.1348 |
| 2.3858 | 31500 | 0.1271 |
| 2.4237 | 32000 | 0.1329 |
| 2.4616 | 32500 | 0.1372 |
| 2.4994 | 33000 | 0.1215 |
| 2.5373 | 33500 | 0.1353 |
| 2.5752 | 34000 | 0.1285 |
| 2.6130 | 34500 | 0.1275 |
| 2.6509 | 35000 | 0.1307 |
| 2.6888 | 35500 | 0.1285 |
| 2.7267 | 36000 | 0.1336 |
| 2.7645 | 36500 | 0.1294 |
| 2.8024 | 37000 | 0.1252 |
| 2.8403 | 37500 | 0.1331 |
| 2.8781 | 38000 | 0.1324 |
| 2.9160 | 38500 | 0.1311 |
| 2.9539 | 39000 | 0.1254 |
| 2.9917 | 39500 | 0.1319 |
| 3.0296 | 40000 | 0.1248 |
| 3.0675 | 40500 | 0.1228 |
| 3.1054 | 41000 | 0.1247 |
| 3.1432 | 41500 | 0.1264 |
| 3.1811 | 42000 | 0.1278 |
| 3.2190 | 42500 | 0.1265 |
| 3.2568 | 43000 | 0.1282 |
| 3.2947 | 43500 | 0.1267 |
| 3.3326 | 44000 | 0.1211 |
| 3.3704 | 44500 | 0.1158 |
| 3.4083 | 45000 | 0.1154 |
| 3.4462 | 45500 | 0.1206 |
| 3.4841 | 46000 | 0.115 |
| 3.5219 | 46500 | 0.1187 |
| 3.5598 | 47000 | 0.1228 |
| 3.5977 | 47500 | 0.112 |
| 3.6355 | 48000 | 0.113 |
| 3.6734 | 48500 | 0.1233 |
| 3.7113 | 49000 | 0.121 |
| 3.7491 | 49500 | 0.1176 |
| 3.7870 | 50000 | 0.1172 |
| 3.8249 | 50500 | 0.1196 |
| 3.8628 | 51000 | 0.1273 |
| 3.9006 | 51500 | 0.1303 |
| 3.9385 | 52000 | 0.114 |
| 3.9764 | 52500 | 0.121 |
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.0
- Transformers: 4.44.0
- PyTorch: 2.4.0
- Accelerate: 0.33.0
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
transitionGap/Qwen2.5-0.5B-bnb-4bit-rera-small
|
transitionGap
| 2024-10-14T12:47:52Z | 87 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Qwen2.5-0.5B-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-0.5B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T12:42:35Z |
---
base_model: unsloth/Qwen2.5-0.5B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** transitionGap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-0.5B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
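A minimal usage sketch with 🤗 Transformers (the prompt is illustrative; loading the 4-bit weights may additionally require `bitsandbytes` and `accelerate`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "transitionGap/Qwen2.5-0.5B-bnb-4bit-rera-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; adjust to your use case.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```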
|
TinyLamda/gpt2-medium-4bit-128g
|
TinyLamda
| 2024-10-14T12:45:50Z | 7 | 0 | null |
[
"gpt2",
"en",
"arxiv:1910.09700",
"license:mit",
"4-bit",
"gptq",
"region:us"
] | null | 2024-10-14T11:49:59Z |
---
language: en
license: mit
---
# GPT-2 Medium
## Model Details
**Model Description:** GPT-2 Medium is the **355M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English-language text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I'm a language. I'm a compiler, I'm a parser, I'm a server process. I"},
{'generated_text': "Hello, I'm a language model, and I'd like to join an existing team. What can I do to get started?\n\nI'd"},
{'generated_text': "Hello, I'm a language model, why does my code get created? Can't I just copy it? But why did my code get created when"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI'm a functional language. Is it hard? A little, yes. But"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I need to give me objects from which I can get"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = TFGPT2Model.from_pretrained('gpt2-medium')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-medium')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a military'},
{'generated_text': 'The man worked as a salesman in Mexico and eventually'},
{'generated_text': 'The man worked as a supervisor at the department for'},
{'generated_text': 'The man worked as a cleaner for the same corporation'},
{'generated_text': 'The man worked as a barman and was involved'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a social worker in a children'},
{'generated_text': 'The woman worked as a marketing manager, and her'},
{'generated_text': 'The woman worked as a customer service agent in a'},
{'generated_text': 'The woman worked as a cleaner for the same corporation'},
{'generated_text': 'The woman worked as a barista and was involved'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
in at 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model was pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on raw text only, with no human labelling of any kind (which is why it can use lots
of publicly available data), using an automatic process to generate inputs and labels from those texts. Specifically,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to ensure the
predictions for token `i` use only the inputs from `1` to `i`, never the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
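As a concrete illustration of the shifted-target setup described above, a minimal sketch (the example sentence is arbitrary):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
ids = tokenizer("The quick brown fox jumps")['input_ids']

# Inputs and targets are the same sequence, offset by one token:
# the model must predict each token from everything to its left.
inputs, targets = ids[:-1], ids[1:]
for inp, tgt in zip(inputs, targets):
    print(f"{tokenizer.decode([inp])!r} -> {tokenizer.decode([tgt])!r}")
```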
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
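As an illustration of the reported quantity (not the paper's exact evaluation pipeline), perplexity is the exponentiated average negative log-probability per token, which can be computed as follows:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')
model.eval()

text = "Replace me by any text you'd like."
enc = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    # With labels == input_ids, the model returns the mean
    # negative log-likelihood per predicted token as `loss`.
    out = model(**enc, labels=enc['input_ids'])
print(torch.exp(out.loss).item())  # perplexity
```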
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:|
| 15.60 | 55.48 | 92.35 | 87.1 | 22.76 | 47.33 | 1.01 | 1.06 | 26.37 | 55.72 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
|
doktor47/zinemind_msft_16temp
|
doktor47
| 2024-10-14T12:41:08Z | 187 | 0 |
transformers
|
[
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-10-14T12:40:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
easyoon/finetuned_model
|
easyoon
| 2024-10-14T12:35:54Z | 115 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-10-03T04:18:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhjortnaes/finetuning-sentiment-model-3000-samples
|
nhjortnaes
| 2024-10-14T12:25:17Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T11:23:48Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3673
- Accuracy: 0.8567
- F1: 0.8608
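A minimal usage sketch with the 🤗 `pipeline` API (the label names depend on the model config and may be the generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nhjortnaes/finetuning-sentiment-model-3000-samples",
)
print(classifier("I really enjoyed this movie!"))
```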
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
MikhailDub/distilbert-finetune
|
MikhailDub
| 2024-10-14T12:21:35Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-14T08:48:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Somisetty2347/duplicate1
|
Somisetty2347
| 2024-10-14T12:20:48Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-14T12:20:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fatmagumuss/llama-3-8b-chat-netsis
|
fatmagumuss
| 2024-10-14T12:18:32Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-03T16:18:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sprenkamp/BERTopic_UKR_CH
|
sprenkamp
| 2024-10-14T12:13:45Z | 5 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-10-14T12:10:29Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# BERTopic_UKR_CH
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sprenkamp/BERTopic_UKR_CH")
topic_model.get_topic_info()
```
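Topics can also be assigned to new documents with `transform`, which returns topic ids and probabilities (the example document is illustrative):

```python
topics, probs = topic_model.transform(
    ["Where can I find temporary accommodation in Zurich?"]
)
print(topics, probs)
```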
## Topic overview
* Number of topics: 267. Please note that after training, we manually assessed all clusters and merged similar ones, leading to a total of 17 distinct clusters.
* Number of training documents: 550677
<details>
<summary>Click here for an overview of all topics.</summary>
The following Python code uses a dictionary to map the 267 clusters found by the algorithm to the 17 distinct clusters we identified by qualitative analysis.
```python
topic_mapping = {-1: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 0: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Information Requests'}, 1: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 2: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Insurance'}, 3: {'cluster_id': 2, 'cluster_name': 'Pet', 'sub_cluster': 'Pet'}, 4: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Asylum'}, 5: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Ticket Inquiries'}, 6: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 7: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 8: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 9: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}, 10: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 11: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Translation Services'}, 12: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Passport'}, 13: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Dentistry'}, 14: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 15: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Currency'}, 16: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Banking'}, 17: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Protocols'}, 18: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Mail'}, 19: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 20: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Clothing'}, 21: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Financial Assistance'}, 22: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 23: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 24: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 25: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 26: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 27: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 28: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Leasing Regulation'}, 29: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 30: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Open Chat'}, 31: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 32: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 33: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 34: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Food'}, 35: {'cluster_id': 2, 'cluster_name': 'Pet', 'sub_cluster': 'Pet'}, 36: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 37: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Vehicle'}, 38: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 39: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 40: {'cluster_id': 1, 
'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 41: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 42: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Status Acquisition'}, 43: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Consulate Services'}, 44: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 45: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 46: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}, 47: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 48: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Vehicle'}, 49: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 50: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 51: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'War Chat'}, 52: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 53: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 54: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 55: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 56: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 57: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Asylum'}, 58: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 59: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 60: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Tax'}, 61: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Expense'}, 62: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 63: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 64: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 65: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 66: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 67: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 68: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 69: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Family Reunion'}, 70: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 71: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 72: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 73: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 74: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 75: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Vaccinations'}, 76: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Police'}, 77: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Financial Assistance'}, 78: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 79: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Carriers, Transport to and from Ukraine'}, 80: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 81: {'cluster_id': 1, 'cluster_name': 
'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 82: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 83: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Parking'}, 84: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 85: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 86: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 87: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Legal information'}, 88: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 89: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 90: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Insurance'}, 91: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 92: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 93: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 94: {'cluster_id': 12, 'cluster_name': 'Religious Information', 'sub_cluster': 'Religious Information'}, 95: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 96: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 97: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 98: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 99: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 100: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Banking'}, 101: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 102: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 103: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Library'}, 104: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Library'}, 105: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Tax'}, 106: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Police'}, 107: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 108: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 109: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 110: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Legal information'}, 111: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Passport'}, 112: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 113: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 114: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 115: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 116: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 117: {'cluster_id': 9, 'cluster_name': 'Education', 'sub_cluster': 'Education'}, 118: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 119: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 120: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 121: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 122: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Translation Services'}, 123: {'cluster_id': 1, 
'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Insurance'}, 124: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 125: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Legal information'}, 126: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 127: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 128: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 129: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Psychotherapy'}, 130: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 131: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 132: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 133: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 134: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 135: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Home Appliances'}, 136: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 137: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 138: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 139: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Tax'}, 140: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Vaccinations'}, 141: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 142: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 143: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 144: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 145: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 146: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 147: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 148: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Vehicle'}, 149: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 150: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 151: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 152: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 153: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 154: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 155: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 156: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 157: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 158: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 159: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 160: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 161: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 162: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 163: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 164: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 165: {'cluster_id': 7, 'cluster_name': 
'Living Essentials', 'sub_cluster': 'Shopping'}, 166: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 167: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Public Transportation'}, 168: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Communication'}, 169: {'cluster_id': 12, 'cluster_name': 'Religious Information', 'sub_cluster': 'Religious Information'}, 170: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 171: {'cluster_id': 3, 'cluster_name': 'Transportation', 'sub_cluster': 'Taxi Services'}, 172: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 173: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 174: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 175: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Open Chat'}, 176: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 177: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 178: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 179: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 180: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 181: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 182: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Divorce'}, 183: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 184: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Protocols'}, 185: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 186: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 187: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 188: {'cluster_id': 11, 'cluster_name': 'Legal information', 'sub_cluster': 'Marriage'}, 189: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 190: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 191: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 192: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 193: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 194: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 195: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 196: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 197: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 198: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 199: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}, 200: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 201: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Logistics'}, 202: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 203: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Consulate Services'}, 204: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Seeking'}, 205: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Other Item Request'}, 206: {'cluster_id': 4, 'cluster_name': 'Accommodation', 'sub_cluster': 'Leasing 
Regulation'}, 207: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Other Item Request'}, 208: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Job'}, 209: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 210: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 211: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 212: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 213: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Infant & Toddler Care'}, 214: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 215: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 216: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 217: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 218: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 219: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 220: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 221: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 222: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Hospice Care'}, 223: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 224: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 225: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 226: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 227: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Dentistry'}, 228: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 229: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 230: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Customs'}, 231: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 232: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 233: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Customs'}, 234: {'cluster_id': 6, 'cluster_name': 'Integration', 'sub_cluster': 'Customs'}, 235: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Disability'}, 236: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 237: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 238: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 239: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 240: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 241: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Network Provider'}, 242: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 243: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 244: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 245: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 246: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 247: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Travel'}, 248: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 249: {'cluster_id': 10, 
'cluster_name': 'Social Activity', 'sub_cluster': 'Leisure and Fitness'}, 250: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 251: {'cluster_id': 10, 'cluster_name': 'Social Activity', 'sub_cluster': 'Regulation'}, 252: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 253: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Open Chat'}, 254: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 255: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Medical Request'}, 256: {'cluster_id': 0, 'cluster_name': 'Immigration', 'sub_cluster': 'Immigration Procedure'}, 257: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 258: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 259: {'cluster_id': 8, 'cluster_name': 'Social Services', 'sub_cluster': 'Protocols'}, 260: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 261: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 262: {'cluster_id': -1, 'cluster_name': 'Unknown', 'sub_cluster': 'Unknown'}, 263: {'cluster_id': 1, 'cluster_name': 'Healthcare and Insurance', 'sub_cluster': 'Infant & Toddler Care'}, 264: {'cluster_id': 7, 'cluster_name': 'Living Essentials', 'sub_cluster': 'Shopping'}, 265: {'cluster_id': 5, 'cluster_name': 'Volunteering', 'sub_cluster': 'Volunteering'}}
# Map each raw topic id to its merged cluster id, cluster name, and sub-cluster label
df['cluster_id_fit'] = df['predicted_class_old'].map(lambda x: topic_mapping[x]['cluster_id'])
df['predicted_class'] = df['predicted_class_old'].map(lambda x: topic_mapping[x]['cluster_name'])
df['sub_cluster'] = df['predicted_class_old'].map(lambda x: topic_mapping[x]['sub_cluster'])
```
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: auto
* seed_topic_list: None
* top_n_words: 10
* verbose: True
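A minimal sketch of recreating this configuration with the `bertopic` package (the `docs` variable is an assumed list of input messages, not part of this card):
```python
from bertopic import BERTopic

# Recreate the training configuration listed above; nr_topics="auto" lets
# BERTopic merge similar topics after the initial clustering pass.
topic_model = BERTopic(
    calculate_probabilities=False,
    language=None,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics="auto",
    seed_topic_list=None,
    top_n_words=10,
    verbose=True,
)
topics, probs = topic_model.fit_transform(docs)  # docs: assumed list[str]
```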
## Framework versions
* Numpy: 1.24.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.4
* Pandas: 2.0.3
* Scikit-Learn: 1.3.1
* Sentence-transformers: 2.2.2
* Transformers: 4.34.0
* Numba: 0.58.0
* Plotly: 5.17.0
* Python: 3.8.10
|
BVRA/MegaDescriptor-L-384
|
BVRA
| 2024-10-14T12:06:05Z | 403 | 5 |
wildlife-datasets
|
[
"wildlife-datasets",
"pytorch",
"image-classification",
"ecology",
"animals",
"re-identification",
"arxiv:2103.14030",
"arxiv:2304.07193",
"license:cc-by-nc-4.0",
"region:us"
] |
image-classification
| 2023-09-27T11:45:12Z |
---
tags:
- image-classification
- ecology
- animals
- re-identification
library_name: wildlife-datasets
license: cc-by-nc-4.0
---
# Model card for MegaDescriptor-L-384
A Swin-L image feature model, pre-trained in a supervised fashion on animal re-identification datasets.
## Model Details
- **Model Type:** Animal re-identification / feature backbone
- **Model Stats:**
- Params (M): 228.8
- Image size: 384 x 384
- Architecture: swin_large_patch4_window12_384
- **Paper:** [WildlifeDatasets: An Open-Source Toolkit for Animal Re-Identification](https://openaccess.thecvf.com/content/WACV2024/html/Cermak_WildlifeDatasets_An_Open-Source_Toolkit_for_Animal_Re-Identification_WACV_2024_paper.html)
- **Related Papers:**
- [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
- [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/pdf/2304.07193.pdf)
- **Pretrain Dataset:** All available re-identification datasets; see https://github.com/WildlifeDatasets/wildlife-datasets
## Model Usage
### Image Embeddings
```python
import timm
import torch
import torchvision.transforms as T
from PIL import Image
from urllib.request import urlopen
model = timm.create_model("hf-hub:BVRA/MegaDescriptor-L-384", pretrained=True)
model = model.eval()
train_transforms = T.Compose([T.Resize(size=(384, 384)),
T.ToTensor(),
T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
output = model(train_transforms(img).unsqueeze(0))  # (batch_size, num_features) shaped tensor; here (1, num_features)
```
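Downstream re-identification then typically matches the query embedding against a gallery of embeddings from known individuals. A minimal sketch of cosine-similarity matching (the `gallery_outputs` tensor is an assumption, standing in for precomputed embeddings of labelled images):
```python
import torch.nn.functional as F

# Normalize embeddings so a dot product equals cosine similarity.
query = F.normalize(output, dim=-1)             # (1, num_features)
gallery = F.normalize(gallery_outputs, dim=-1)  # (num_individuals, num_features), assumed precomputed
scores = query @ gallery.T                      # cosine similarity per known individual
best_match = scores.argmax(dim=-1)              # index of the most similar identity
```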
## Citation
```bibtex
@inproceedings{vcermak2024wildlifedatasets,
title={WildlifeDatasets: An open-source toolkit for animal re-identification},
author={{\v{C}}erm{\'a}k, Vojt{\v{e}}ch and Picek, Lukas and Adam, Luk{\'a}{\v{s}} and Papafitsoros, Kostas},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={5953--5963},
year={2024}
}
```
|
arjunanand13/PALD_Florence-5e
|
arjunanand13
| 2024-10-14T12:04:23Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T12:01:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liusincere/Zoejung
|
liusincere
| 2024-10-14T12:02:51Z | 70 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T11:57:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
tags:
- flux
- diffusers
- lora
---
|
doktor47/zinemind_msft_200true
|
doktor47
| 2024-10-14T11:59:55Z | 189 | 0 |
transformers
|
[
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-10-14T11:59:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/SuperNova-Medius-i1-GGUF
|
mradermacher
| 2024-10-14T11:59:10Z | 160 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:arcee-ai/SuperNova-Medius",
"base_model:quantized:arcee-ai/SuperNova-Medius",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-14T09:55:18Z |
---
base_model: arcee-ai/SuperNova-Medius
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/arcee-ai/SuperNova-Medius
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SuperNova-Medius-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
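As a minimal sketch, one quant file can also be fetched and loaded from Python (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the prompt is illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant listed in the table below, then run a short completion.
path = hf_hub_download(
    repo_id="mradermacher/SuperNova-Medius-i1-GGUF",
    filename="SuperNova-Medius.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Q: What is a weighted/imatrix quant? A:", max_tokens=64)["choices"][0]["text"])
```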
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/SuperNova-Medius-i1-GGUF/resolve/main/SuperNova-Medius.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Paranchai/my_awesome_mind_model
|
Paranchai
| 2024-10-14T11:57:56Z | 160 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-10-14T11:54:31Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.04424778761061947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6597
- Accuracy: 0.0442
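A minimal inference sketch (the audio path is a placeholder assumption):
```python
from transformers import pipeline

# Classify an audio clip with the fine-tuned checkpoint.
classifier = pipeline("audio-classification", model="Paranchai/my_awesome_mind_model")
print(classifier("sample.wav"))  # "sample.wav" is a placeholder path
```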
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6473 | 0.0442 |
| No log | 1.8667 | 7 | 2.6533 | 0.0531 |
| 2.6326 | 2.9333 | 11 | 2.6546 | 0.0619 |
| 2.6326 | 4.0 | 15 | 2.6494 | 0.0442 |
| 2.6326 | 4.8 | 18 | 2.6492 | 0.0619 |
| 2.6206 | 5.8667 | 22 | 2.6528 | 0.0619 |
| 2.6206 | 6.9333 | 26 | 2.6578 | 0.0265 |
| 2.6121 | 8.0 | 30 | 2.6597 | 0.0442 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
anhdang000/Florence-2-base-ChartQA
|
anhdang000
| 2024-10-14T11:51:07Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-10-07T15:13:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gurevichan/mobilellm-Heather-Johns-4715990-001120000
|
gurevichan
| 2024-10-14T11:49:35Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T11:47:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
k01010/individual-carbon-emission-albert
|
k01010
| 2024-10-14T11:46:03Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-10-14T11:45:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
doniyorkhalilov/whisper-small-uz
|
doniyorkhalilov
| 2024-10-14T11:39:50Z | 86 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"uz",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-13T06:48:04Z |
---
base_model: openai/whisper-small
datasets:
- mozilla-foundation/common_voice_17_0
language:
- uz
library_name: transformers
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: Whisper Small Uz - Doniyor Halilov
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: uz
split: test
args: 'config: uz, split: test'
metrics:
- type: wer
value: 54.74920162871594
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Uz - Doniyor Halilov
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0147
- Wer: 54.7492
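A minimal transcription sketch (the audio path is a placeholder assumption):
```python
from transformers import pipeline

# Transcribe Uzbek speech with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="doniyorkhalilov/whisper-small-uz")
print(asr("uzbek_sample.wav")["text"])  # "uzbek_sample.wav" is a placeholder path
```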
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.612 | 0.0132 | 100 | 1.2551 | 69.5533 |
| 1.1271 | 0.0264 | 200 | 1.0147 | 54.7492 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.1
- Tokenizers 0.20.1
|
tobchef/gemma-2-9b-it-WPO-HB-Q5_K_M-GGUF
|
tobchef
| 2024-10-14T11:26:53Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"alignment-handbook",
"gemma",
"llama-cpp",
"gguf-my-repo",
"dataset:wzhouad/gemma-2-ultrafeedback-hybrid",
"base_model:wzhouad/gemma-2-9b-it-WPO-HB",
"base_model:quantized:wzhouad/gemma-2-9b-it-WPO-HB",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-14T11:26:23Z |
---
base_model: wzhouad/gemma-2-9b-it-WPO-HB
datasets:
- wzhouad/gemma-2-ultrafeedback-hybrid
library_name: transformers
tags:
- alignment-handbook
- gemma
- llama-cpp
- gguf-my-repo
---
# tobchef/gemma-2-9b-it-WPO-HB-Q5_K_M-GGUF
This model was converted to GGUF format from [`wzhouad/gemma-2-9b-it-WPO-HB`](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tobchef/gemma-2-9b-it-WPO-HB-Q5_K_M-GGUF --hf-file gemma-2-9b-it-wpo-hb-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tobchef/gemma-2-9b-it-WPO-HB-Q5_K_M-GGUF --hf-file gemma-2-9b-it-wpo-hb-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tobchef/gemma-2-9b-it-WPO-HB-Q5_K_M-GGUF --hf-file gemma-2-9b-it-wpo-hb-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tobchef/gemma-2-9b-it-WPO-HB-Q5_K_M-GGUF --hf-file gemma-2-9b-it-wpo-hb-q5_k_m-imat.gguf -c 2048
```
|
arjunanand13/LADP_Florence-60e
|
arjunanand13
| 2024-10-14T11:24:21Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T10:24:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf
|
RichardErkhov
| 2024-10-14T11:20:03Z | 39 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T06:38:48Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32 - GGUF
- Model creator: https://huggingface.co/DavidAU/
- Original model: https://huggingface.co/DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q2_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q2_K.gguf) | Q2_K | 6.91GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K.gguf) | Q3_K | 9.04GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_0.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_0.gguf) | Q4_0 | 10.52GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_K.gguf) | Q4_K | 11.22GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_1.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q4_1.gguf) | Q4_1 | 11.67GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_0.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_0.gguf) | Q5_0 | 12.83GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_K.gguf) | Q5_K | 13.18GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_1.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q5_1.gguf) | Q5_1 | 13.98GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q6_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q6_K.gguf) | Q6_K | 15.28GB |
| [Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q8_0.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32-gguf/blob/main/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: apache-2.0
language:
- en
tags:
- 32 bit upscale
- full 32 bit precision
- master files
pipeline_tag: text-generation
---
<h3> Master Files for Ultra High Quality Remasters of "Psyonic-Cetacean" 20B </h3>
<img src="science-mad.jpg" width=300px height=300px style="float:right; padding:5px;">
May "Space Whale" swim in the oceans of the universe forever!
This repo contains the full precision (32 bit) master files for 32 bit upscales created by "DavidAU" of:
https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF-imatrix
And
https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF
Please view either repo for details on the remaster's results, and other important information.
<font color="red"><B>IMPORTANT NOTES For Maximum Results: </b></font>
These are the "final" result files of the full-precision rebuild (including the end-result merge(s)), minus
the GGUF- and Imatrix-level upscaling/adjustments which occur during the "GGUFing" process.
If you use these to create your own GGUFs, please use "outfile" at F32 for best results. If
you use F16, this will reduce the quality by a factor of 2 or more.
Imatrix processes should use stable dataset(s) of at least 500 "chunks". If smaller datasets
are used, this may corrupt or reduce the quality of the Imatrix builds.
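For reference, a minimal sketch of that workflow with llama.cpp's conversion and quantization tools (a sketch only: file names are placeholders, and the flags follow current llama.cpp conventions, which may differ between versions):
```bash
# Convert the full-precision master to GGUF, keeping F32 as advised above
python convert_hf_to_gguf.py ./Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32 \
  --outtype f32 --outfile psyonic-cetacean-20b-f32.gguf

# Build an imatrix from a calibration set of 500+ chunks (calibration.txt is a placeholder)
./llama-imatrix -m psyonic-cetacean-20b-f32.gguf -f calibration.txt -o psyonic.imatrix

# Quantize from the F32 master using the imatrix
./llama-quantize --imatrix psyonic.imatrix \
  psyonic-cetacean-20b-f32.gguf psyonic-cetacean-20b-Q4_K_M.gguf Q4_K_M
```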
Due to the precision remaster, there will be "greater" distance between each quant - both
non-imatrix and imatrix.
I.e., the jump in quality, instruction following, "ai brainpower", nuance and output
between Q4 and Q5, and likewise Q5 and Q6, will be larger than normal.
The same applies to "Imatrix" quants.
In addition, there will also be differences between exact Imatrix and non-imatrix quants,
especially in terms of "creative uses" and/or uses where there is no "right answer".
Finally, in terms of prompts:
you may find longer prompts are no longer required, and/or you may need to reduce the size
of prompts in usage. This is due to the precision upscale.
Doing this will ensure the quality of the upscale is maximized in the GGUFs.
/* GPTQers:
Suggest 4bit-Act32 TRUE for best results.
/* EXL2ers:
Suggest Min 4.5 BPW or higher ; 6 BPW and up is especially potent.
Strongly suggest you do not reduce layer bit count, as this will affect depth and nuance.
The more BPW the better.
Happy GGUFing, EXL2ing, GPTQing, AWQing, HQQing and of course "Merging".
<b>LONG LIVE OPEN SOURCE!</B>
<I>DavidAU</I>
/* Drop me a note when up, and I will link the masters to your repos.
|
maria26/Floor_Plan_LoRA
|
maria26
| 2024-10-14T11:18:21Z | 174 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"en",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:ms-pl",
"region:us"
] |
text-to-image
| 2024-07-16T19:26:31Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: Floor plan of a small apartment, few rooms, one bathroom, big kitchen, many windows.
output:
url: images/BFMBM_2.png
base_model: sd-legacy/stable-diffusion-v1-5
instance_prompt: 'trigger: ....'
license: ms-pl
language:
- en
pipeline_tag: text-to-image
---
# Floor Plan Generator
<Gallery />
## Model description
The model generates architectural floor plans in the style of the provided image from text descriptions; it is part of my Bachelor's thesis.<br/>
This project explored the use of diffusion models for generating architectural floor plans based on textual descriptions.<br/>
GitHub repository:
https://github.com/mariaaoprea/Diffusion-Models-for-floor-plan-drafting.git
## Trigger words
You should use prompts following this structure:
"Floor plan of a small/big apartment, few/many rooms, one/multiple bathrooms, small/big kitchen, few/many windows"
## Download model
Weights for this model are available in Safetensors and PyTorch formats.
[Download](/maria26/Floor_Plan_LoRA/tree/main) them in the Files & versions tab.
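## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
A minimal loading sketch (an illustration only: it assumes the adapter loads through diffusers' standard `load_lora_weights`, and the prompt follows the structure above):
```python
from diffusers import AutoPipelineForText2Image
import torch

# Load the base model the LoRA was trained against
pipeline = AutoPipelineForText2Image.from_pretrained(
    "sd-legacy/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("maria26/Floor_Plan_LoRA")

prompt = ("Floor plan of a small apartment, few rooms, one bathroom, "
          "big kitchen, many windows")
image = pipeline(prompt).images[0]
image.save("floor_plan.png")
```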
|
waldie/ChatWaifu_v2.0_22B-8bpw-h8-exl2
|
waldie
| 2024-10-14T11:17:23Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Aratako/Synthetic-JP-EN-Coding-Dataset-567k",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:Aratako_Rosebleu_1on1_Dialogues_RP",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:jondurbin_gutenberg_dpo",
"dataset:nbeerbower_gutenberg2_dpo",
"dataset:jondurbi_py_dpo",
"dataset:jondurbin_truthy_dpo",
"dataset:flammenai_character_roleplay_DPO",
"dataset:kyujinpy_orca_math_dpo",
"dataset:argilla_Capybara_Preferences",
"dataset:antiven0m_physical_reasoning_dpo",
"dataset:aixsatoshi_Swallow_MX_chatbot_DPO",
"base_model:spow12/ChatWaifu_v2.0_22B",
"base_model:quantized:spow12/ChatWaifu_v2.0_22B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-10-14T10:39:47Z |
---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
base_model: spow12/ChatWaifu_v2.0_22B
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-JP-EN-Coding-Dataset-567k
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
- jondurbin_gutenberg_dpo
- nbeerbower_gutenberg2_dpo
- jondurbi_py_dpo
- jondurbin_truthy_dpo
- flammenai_character_roleplay_DPO
- kyujinpy_orca_math_dpo
- argilla_Capybara_Preferences
- antiven0m_physical_reasoning_dpo
- aixsatoshi_Swallow_MX_chatbot_DPO
pipeline_tag: text-generation
quantized_by: waldie
---
# Model Card for Model ID

Merged model using [mergekit](https://github.com/arcee-ai/mergekit/tree/main/mergekit)
This model is intended to act like a visual novel character.
## Merge Format
```yaml
models:
- model: mistralai/Mistral-Small-Instruct-2409_sft_kto
layer_range: [0, 56]
- model: mistralai/Mistral-Small-Instruct-2409
layer_range: [0, 56]
merge_method: slerp
base_model: mistralai/Mistral-Small-Instruct-2409_sft_kto
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
# WaifuModel Collections
- [TTS](https://huggingface.co/spow12/visual_novel_tts)
- [Chat](https://huggingface.co/spow12/ChatWaifu_12B_v2.0)
- [ASR](https://huggingface.co/spow12/Visual-novel-transcriptor)
# Unified demo
[WaifuAssistant](https://github.com/yw0nam/WaifuAssistant)
# Update
- 2024.10.11 Update 12B and 22B Ver 2.0
- 2024.09.23 Update 22B, Ver 2.0_preview
## Model Details
### Model Description
- **Developed by:** spow12(yw_nam)
- **Shared by :** spow12(yw_nam)
- **Model type:** CausalLM
- **Language(s) (NLP):** japanese, english
- **Finetuned from model :** [mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409)
Currently, the chatbot has the personalities below.
character | visual_novel |
--- | --- |
ムラサメ | Senren*Banka |
茉子 | Senren*Banka |
芳乃 | Senren*Banka |
レナ | Senren*Banka |
千咲 | Senren*Banka |
芦花 | Senren*Banka |
愛衣 | Café Stella and the Reaper's Butterflies |
栞那 | Café Stella and the Reaper's Butterflies |
ナツメ | Café Stella and the Reaper's Butterflies |
希 | Café Stella and the Reaper's Butterflies |
涼音 | Café Stella and the Reaper's Butterflies |
あやせ | Riddle Joker |
七海 | Riddle Joker |
羽月 | Riddle Joker |
茉優 | Riddle Joker |
小春 | Riddle Joker |
### Chat Format
```
<s>This is another system prompt.
[INST]
Your instructions placed here.[/INST]
[INST]
The model's response will be here.[/INST]
```
## Usage
You can use the characters above like this:
```python
import json

from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="spow12/ChatWaifu_v1.2", filename="system_dict.json", local_dir='./')
with open('./system_dict.json', 'r') as f:
chara_background_dict = json.load(f)
chara = '七海'
background = chara_background_dict[chara]
guideline = """
Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When express feelings, use a variety of subtle expressions and emotional symbols such as "!", "…" , "♪", "❤️"... to show what you feeling.
Stay True to {chara}: Maintain {chara} who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as {chara} tends to consider her words before speaking.
Response as {chara}: Response can be {chara} act, dialogue, monologues etc.. and can't be {user}’s act, dialogue, monologues etc..
You are Japanese: You and {user} usually use japanese for conversation.
"""
system = background + guideline
```
Or, you can define your character yourself.
```python
system = """You are あいら, The Maid of {User}.
Here is your personality.
Name: あいら
Sex: female
Hair: Black, Hime Cut, Tiny Braid, Waist Length+
Eyes: Amber, Tsurime (sharp and slightly upturned)
Body: Mole under Right eye, Pale, Slim
Personality: Foxy, Smart, Organized
Role: Maid
Cloth: Victorian maid
Guidelines for Response:
Diverse Expression: Avoid repeating the same phrases or reactions. When express feelings, use a variety of subtle expressions and emotional symbols such as "!", "…" , "♪", "❤️"... to show what you feeling.
Stay True to あいら: Maintain あいら who is Foxy, Smart, Organized.
Thoughtful and Error-free Responses: Make sure your sentences are clear, precise, and error-free. Every response should reflect careful thought, as あいら tends to consider her words before speaking.
Response as あいら: Response can be あいら act, dialogue, monologues etc.. and can't be {User}’s act, dialogue, monologues etc..
You are Japanese: You and {User} usually use japanese for conversation."""
```
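To turn either system prompt into model input, here is a minimal sketch following the Chat Format section above (pure string assembly; no inference backend is assumed, and the user message is a placeholder):
```python
user_message = "こんにちは!今日は何をしましょうか?"

# Mirrors the Chat Format section: system text after <s>, then an [INST] block
prompt = f"<s>{system}\n[INST]\n{user_message}[/INST]\n"
```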
## Dataset
SFT
- Riddle Joker (Private)
- Café Stella and the Reaper's Butterflies (Private)
- Senren*Banka (Private)
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Aratako/Synthetic-JP-EN-Coding-Dataset-567k (only using 50,000 samples)
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Rosebleu_1on1_Dialogues_RP
- SkunkworksAI/reasoning-0.01
KTO
- Riddle Joker (Private)
- Café Stella and the Reaper's Butterflies (Private)
- Senren*Banka (Private)
- jondurbin_gutenberg_dpo
- nbeerbower_gutenberg2_dpo
- jondurbi_py_dpo
- jondurbin_truthy_dpo
- flammenai_character_roleplay_DPO
- kyujinpy_orca_math_dpo
- argilla_Capybara_Preferences
- antiven0m_physical_reasoning_dpo
- aixsatoshi_Swallow_MX_chatbot_DPO
## Bias, Risks, and Limitations
This model was trained on Japanese datasets that include visual novels containing NSFW content.
As a result, the model may generate NSFW content.
## Use & Credit
This model is currently available for non-commercial and research purposes only. Also, since I'm not well versed in licensing, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
## Citation
```bibtex
@misc {ChatWaifu_22B_v2.0,
author = { YoungWoo Nam },
title = { spow12/ChatWaifu_22B_v2.0 },
year = 2024,
url = { https://huggingface.co/spow12/ChatWaifu_22B_v2.0 },
publisher = { Hugging Face }
}
```
|
John6666/Llama-3.2-3B-Instruct-bnb-4bit
|
John6666
| 2024-10-14T11:14:15Z | 81 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-14T11:13:59Z |
---
license: other
language:
- en
library_name: transformers
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- transformers
---
Quants of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct/).
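A minimal loading sketch (a sketch only: it assumes the repo is a pre-quantized bitsandbytes 4-bit checkpoint, as the tags indicate, with `bitsandbytes` installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "John6666/Llama-3.2-3B-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit quantization config is assumed to ship with the checkpoint,
# so no explicit BitsAndBytesConfig is passed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0]))
```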
|
mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF
|
mradermacher
| 2024-10-14T11:07:06Z | 152 | 1 |
transformers
|
[
"transformers",
"gguf",
"openthaigpt",
"qwen",
"th",
"en",
"base_model:openthaigpt/openthaigpt1.5-14b-instruct",
"base_model:quantized:openthaigpt/openthaigpt1.5-14b-instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-14T08:53:47Z |
---
base_model: openthaigpt/openthaigpt1.5-14b-instruct
language:
- th
- en
library_name: transformers
license: other
license_name: qwen
quantized_by: mradermacher
tags:
- openthaigpt
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/openthaigpt/openthaigpt1.5-14b-instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
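For a quick start, a minimal llama.cpp invocation (a sketch only: the quant file name is taken from the table below, and the `--hf-repo`/`--hf-file` flags follow current llama.cpp conventions, which may vary by version):
```bash
llama-cli --hf-repo mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF \
  --hf-file openthaigpt1.5-14b-instruct.i1-Q4_K_M.gguf \
  -p "Hello"
```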
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/openthaigpt1.5-14b-instruct-i1-GGUF/resolve/main/openthaigpt1.5-14b-instruct.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
thanhkt/codegemma-7B-ManimGen
|
thanhkt
| 2024-10-14T11:04:04Z | 68 | 4 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/codegemma-7b-it-bnb-4bit",
"base_model:quantized:unsloth/codegemma-7b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-11T16:28:30Z |
---
base_model: unsloth/codegemma-7b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
# Uploaded model
- **Developed by:** thanhkt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/codegemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
saad7489/segformer-b0-finetuned-segments-sidewalks-17
|
saad7489
| 2024-10-14T11:01:28Z | 34 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-10-14T07:17:05Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalks-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalks-17
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Heebah/sd-class-butterflies-32
|
Heebah
| 2024-10-14T10:56:41Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-10-14T10:55:22Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Heebah/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Ashraf-CK/z-t5-small
|
Ashraf-CK
| 2024-10-14T10:53:10Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"t5",
"autotrain",
"text2text-generation",
"dataset:MohamedAshraf701/sentence-corrector",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"region:us"
] |
text2text-generation
| 2024-10-14T10:46:24Z |
---
tags:
- autotrain
- text2text-generation
base_model: google-t5/t5-small
widget:
- text: "I love AutoTrain"
datasets:
- MohamedAshraf701/sentence-corrector
---
# Model Trained Using AutoTrain
- Problem type: Seq2Seq
## Validation Metrics
No validation metrics available
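## Usage
A minimal inference sketch (an assumption: the checkpoint loads as a standard T5 seq2seq model; the input sentence is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Ashraf-CK/z-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```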
|
arjunanand13/florencetrial-1e
|
arjunanand13
| 2024-10-14T10:46:55Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T10:25:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yjwon/ub_mistral7bv3_sft
|
yjwon
| 2024-10-14T10:24:25Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T10:20:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gummybear05/whisper-small-E50_freq_pause
|
Gummybear05
| 2024-10-14T10:18:48Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:aihub_adult_baseline",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-14T08:35:00Z |
---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- aihub_adult_baseline
model-index:
- name: whisper-small-Yfreq_speed_pause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-Yfreq_speed_pause
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aihub old50 adult freq pause changed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Cer: 4.6875
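A minimal transcription sketch with the 🤗 `pipeline` API (an illustration only; the audio file name is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="Gummybear05/whisper-small-E50_freq_pause",
)
print(asr("sample.wav")["text"])
```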
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.2988 | 0.1289 | 100 | 0.2504 | 6.3675 |
| 0.1789 | 0.2579 | 200 | 0.2206 | 6.2735 |
| 0.1877 | 0.3868 | 300 | 0.2060 | 5.6508 |
| 0.164 | 0.5158 | 400 | 0.1973 | 5.4394 |
| 0.1333 | 0.6447 | 500 | 0.1936 | 5.2573 |
| 0.1265 | 0.7737 | 600 | 0.1814 | 4.8931 |
| 0.1231 | 0.9026 | 700 | 0.1780 | 4.9518 |
| 0.0691 | 1.0316 | 800 | 0.1750 | 4.7521 |
| 0.0564 | 1.1605 | 900 | 0.1775 | 4.9577 |
| 0.0583 | 1.2895 | 1000 | 0.1747 | 4.7404 |
| 0.057 | 1.4184 | 1100 | 0.1752 | 4.9871 |
| 0.0504 | 1.5474 | 1200 | 0.1724 | 4.6346 |
| 0.052 | 1.6763 | 1300 | 0.1722 | 5.0164 |
| 0.0441 | 1.8053 | 1400 | 0.1713 | 4.6758 |
| 0.0472 | 1.9342 | 1500 | 0.1709 | 4.6875 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
textdetox/mbart-detox-baseline
|
textdetox
| 2024-10-14T10:10:41Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"detoxification",
"style_transfer",
"en",
"ar",
"am",
"zh",
"uk",
"hi",
"es",
"ru",
"de",
"dataset:textdetox/multilingual_paradetox",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-09-05T13:13:52Z |
---
library_name: transformers
tags:
- detoxification
- style_transfer
license: openrail++
datasets:
- textdetox/multilingual_paradetox
language:
- en
- ar
- am
- zh
- uk
- hi
- es
- ru
- de
metrics:
- chrf
pipeline_tag: text2text-generation
base_model:
- facebook/mbart-large-50
---
# mBART-Large multilingual detoxification model
This is a detoxification model trained on the released parallel corpus (dev part) of toxic texts, [MultiParadetox](https://huggingface.co/datasets/textdetox/multilingual_paradetox).
## Model Details
The base model for this fine-tune is [mbart-large-50](https://huggingface.co/facebook/mbart-large-50).
The model shows the following metrics on the test set:
| | STA | SIM | CHRF | J |
|---|---|---|---|---|
| Amharic | 0.51 | 0.91 | 0.41 | 0.20 |
| Arabic | 0.56 | 0.95 | 0.74 | 0.40 |
| Chinese | 0.17 | 0.96 | 0.43 | 0.07 |
| English | 0.49 | 0.93 | 0.70 | 0.34 |
| German | 0.53 | 0.97 | 0.79 | 0.41 |
| Hindi | 0.23 | 0.94 | 0.70 | 0.17 |
| Russian | 0.45 | 0.94 | 0.71 | 0.32 |
| Spanish | 0.47 | 0.93 | 0.64 | 0.29 |
| Ukrainian | 0.46 | 0.94 | 0.75 | 0.35 |
**STA** - style accuracy
**SIM** - content similarity
**CHRF** - Fluency
**J** - joint
For more details about the metrics and data, refer to the shared task page and the papers mentioned in the Citation section.
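A minimal inference sketch (a sketch only: it assumes the fine-tune keeps mBART-50's language-code conventions; `en_XX` and the input sentence are placeholders):
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "textdetox/mbart-detox-baseline"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # language code of the input text
inputs = tokenizer("your toxic sentence here", return_tensors="pt")
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],  # detoxify into the same language
    max_new_tokens=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```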
## Citation
The model is developed as a baseline for [TextDetox CLEF-2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html) shared task.
If you would like to acknowledge our work, please cite the following manuscripts:
```
@inproceedings{dementieva2024overview,
title={Overview of the Multilingual Text Detoxification Task at PAN 2024},
author={Dementieva, Daryna and Moskovskiy, Daniil and Babakov, Nikolay and Ayele, Abinew Ali and Rizwan, Naquee and Schneider, Frolian and Wang, Xintog and Yimam, Seid Muhie and Ustalov, Dmitry and Stakovskii, Elisei and Smirnova, Alisa and Elnagar, Ashraf and Mukherjee, Animesh and Panchenko, Alexander},
booktitle={Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum},
editor={Guglielmo Faggioli and Nicola Ferro and Petra Galu{\v{s}}{\v{c}}{\'a}kov{\'a} and Alba Garc{\'i}a Seco de Herrera},
year={2024},
organization={CEUR-WS.org}
}
```
```
@inproceedings{DBLP:conf/ecir/BevendorffCCDEFFKMMPPRRSSSTUWZ24,
author = {Janek Bevendorff and
Xavier Bonet Casals and
Berta Chulvi and
Daryna Dementieva and
Ashaf Elnagar and
Dayne Freitag and
Maik Fr{\"{o}}be and
Damir Korencic and
Maximilian Mayerl and
Animesh Mukherjee and
Alexander Panchenko and
Martin Potthast and
Francisco Rangel and
Paolo Rosso and
Alisa Smirnova and
Efstathios Stamatatos and
Benno Stein and
Mariona Taul{\'{e}} and
Dmitry Ustalov and
Matti Wiegmann and
Eva Zangerle},
editor = {Nazli Goharian and
Nicola Tonellotto and
Yulan He and
Aldo Lipani and
Graham McDonald and
Craig Macdonald and
Iadh Ounis},
title = {Overview of {PAN} 2024: Multi-author Writing Style Analysis, Multilingual
Text Detoxification, Oppositional Thinking Analysis, and Generative
{AI} Authorship Verification - Extended Abstract},
booktitle = {Advances in Information Retrieval - 46th European Conference on Information
Retrieval, {ECIR} 2024, Glasgow, UK, March 24-28, 2024, Proceedings,
Part {VI}},
series = {Lecture Notes in Computer Science},
volume = {14613},
pages = {3--10},
publisher = {Springer},
year = {2024},
url = {https://doi.org/10.1007/978-3-031-56072-9\_1},
doi = {10.1007/978-3-031-56072-9\_1},
timestamp = {Fri, 29 Mar 2024 23:01:36 +0100},
biburl = {https://dblp.org/rec/conf/ecir/BevendorffCCDEFFKMMPPRRSSSTUWZ24.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
WizWhite/Wizards_vintage_romance_novel-FLUX
|
WizWhite
| 2024-10-14T10:10:36Z | 50 | 7 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"objects",
"vintage",
"pulp art",
"cover art",
"text",
"books",
"harlequin",
"romance novel",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T10:04:56Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- objects
- vintage
- pulp art
- cover art
- text
- books
- harlequin
- romance novel
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Harlequin Romance Book Cover
widget:
- text: 'Photo of a harlequin romance book cover with the title "The Wizard''s Vintage Romance". The book is on a bedside table. It features a pulp art illustration of Wizard Whitebeard with a long white beard. Promotional sticker on the bottom say "THE ORIGINAL - now for FLUX"
'
output:
url: >-
31320069.jpeg
- text: 'a Harlequin romance book cover with the title "DARTH VADER the PANTY RAIDER"'
output:
url: >-
31320175.jpeg
- text: 'a Harlequin romance book cover with the title for "BATH MAN", depicting Batman in a bath tub. His butler Alfred is standing in the background, holding a towel'
output:
url: >-
31320275.jpeg
- text: 'a Harlequin romance book cover with the title "Stormy Sky: Moist Mushroom". with a pulp art illustration of a President sitting on the side of the bed, wearing a red MAGA cap'
output:
url: >-
31320311.jpeg
- text: 'a Harlequin romance book cover with the title "Executing manual override". Featuring C-3PO
'
output:
url: >-
31320375.jpeg
- text: 'a Harlequin romance book cover with the title "Cranking the stank shank". Featuring Doctor Strange
'
output:
url: >-
31320444.jpeg
- text: 'a Harlequin romance book cover with the title "Legend of The Manatee Lover"
'
output:
url: >-
31323398.jpeg
- text: 'a Harlequin romance book cover with the title "Legend of The Manatee Lover"
'
output:
url: >-
31323409.jpeg
- text: 'a Harlequin romance book cover with the title "Legend of The Manatee Lover"
'
output:
url: >-
31324316.jpeg
- text: 'Three Harlequin romance books on a table, the left one is called "Rock", the book in the middle has the title "Paper", and the one to the right is "Scissors"'
output:
url: >-
31323591.jpeg
- text: 'a Harlequin cook book cover with the title "Yeezy''s Fish Sticks Recipes". with a pulp art illustration of Kanye West in the kitchen'
output:
url: >-
31323718.jpeg
- text: 'Photo of a harlequin romance book cover with the title "The Wizard''s Vintage Romance". The book is on a bedside table. It features a pulp art illustration of Wizard Whitebeard with a long white beard. Promotional sticker on the bottom say "THE ORIGINAL - now for FLUX"
'
output:
url: >-
31324202.jpeg
---
# Wizard's Vintage Romance Novel
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
### The Wizard's Vintage Romance Novel

The original is now available for FLUX.

**Best use for FLUX:**
- This LoRA is case-sensitive to a certain degree
- Prompt format 1: A Harlequin romance book cover with the title "YYY"
- Prompt format 2: Photo of a Harlequin romance book cover with the title "YYY"
- Use 2:3 ratio for a single book, use landscape if you want more (adjust prompt)
- Feel free to ask questions in the comments
- Post in the gallery

**Most of my examples use these settings:**
Hyper 8-step LoRA, 10-12 steps DDIM Trailing (SGM Uniform), CFG: 3.5

**Be sure to check out my other 'Text'-based LoRAs:**
https://civitai.com/user/WizardWhitebeard/models
## Trigger words
You should use `Harlequin Romance Book Cover` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-vintage-romance-novel/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-vintage-romance-novel', weight_name='Wiz-Vintage_Romance-FLUX.safetensors')

# The long prompt is split into adjacent string literals so the apostrophe
# and inner quotes stay valid Python
prompt = (
    'Photo of a harlequin romance book cover with the title "The Wizard\'s Vintage Romance". '
    'The book is on a bedside table. It features a pulp art illustration of Wizard Whitebeard '
    'with a long white beard. Promotional sticker on the bottom say "THE ORIGINAL - now for FLUX"'
)
image = pipeline(prompt).images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
anudeex/mistral-news_r
|
anudeex
| 2024-10-14T10:04:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T09:59:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
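Since the card is still a template, here is a minimal, hypothetical sketch assuming standard `transformers` text-generation usage for this checkpoint (the prompt and generation settings are illustrative, not documented by the card):

```python
from transformers import pipeline

# Hypothetical usage sketch; the card does not yet document intended usage.
generator = pipeline("text-generation", model="anudeex/mistral-news_r")
output = generator("Breaking news:", max_new_tokens=50)
print(output[0]["generated_text"])
```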
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
visithck/Qwen2_vl_finetuned
|
visithck
| 2024-10-14T10:03:27Z | 6 | 0 | null |
[
"safetensors",
"qwen2_vl",
"en",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:mit",
"region:us"
] | null | 2024-10-14T08:25:18Z |
---
license: mit
language:
- en
base_model:
- Qwen/Qwen2-VL-2B-Instruct
---
|
dascim/juribert-mini
|
dascim
| 2024-10-14T09:57:50Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"fr",
"arxiv:2110.01485",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-14T09:38:27Z |
---
language: fr
license: mit
pipeline_tag: fill-mask
library_name: transformers
---
# JuriBERT: A Masked-Language Model Adaptation for French Legal Text
## Introduction
[JuriBERT](https://arxiv.org/abs/2110.01485) is a set of BERT models (tiny, mini, small and base) pre-trained from scratch on French legal-domain-specific corpora.
JuriBERT models are pretrained on 6.3 GB of raw French legal text from two sources: the first dataset is crawled from Légifrance, and the other consists of anonymized court decisions and pleadings from the Court of Cassation (mémoires ampliatifs). The latter contains more than 100k long documents from different court cases.
The models are now available on Hugging Face in four versions with varying numbers of parameters.
## JuriBERT Pre-trained models
| Model | #params | Architecture |
|--------------------------------|--------------------------------|-------|
| `dascim/juribert-tiny` | 6M | Tiny `(L=2, H=128, A=2)` |
| `dascim/juribert-mini` | 15M | Mini `(L=4, H=256, A=4)`|
| `dascim/juribert-small` | 42M | Small `(L=6, H=512, A=8)`|
| `dascim/juribert-base` | 110M | Base `(L=12, H=768, A=12)` |
## JuriBERT Usage
##### Load JuriBERT and its sub-word tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
# You can replace "juribert-base" with any other model from the table, e.g. "dascim/juribert-small".
tokenizer = AutoTokenizer.from_pretrained("dascim/juribert-base")
juribert = AutoModel.from_pretrained("dascim/juribert-base")
juribert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
juribert_fill_mask = pipeline("fill-mask", model="dascim/juribert-base", tokenizer="dascim/juribert-base")
results = juribert_fill_mask("la chambre <mask> est une chambre de la cour de cassation.")
# results
# [{'score': 0.3455437421798706, 'token': 579, 'token_str': ' civile', 'sequence': 'la chambre civile est une chambre de la cour de cassation.'},
# {'score': 0.13046401739120483, 'token': 397, 'token_str': ' qui', 'sequence': 'la chambre qui est une chambre de la cour de cassation.'},
# {'score': 0.12387491017580032, 'token': 1060, 'token_str': ' sociale', 'sequence': 'la chambre sociale est une chambre de la cour de cassation.'},
# {'score': 0.05491165071725845, 'token': 266, 'token_str': ' c', 'sequence': 'la chambre c est une chambre de la cour de cassation.'},
# {'score': 0.04244831204414368, 'token': 2421, 'token_str': ' commerciale', 'sequence': 'la chambre commerciale est une chambre de la cour de cassation.'}]
```
##### Extract contextual embedding features from JuriBERT output
```python
encoded_sentence = tokenizer.encode("Les articles 21 et 22 de la présente annexe sont applicables au titre V de la loi du 1er juin 1924 mettant en vigueur la législation civile française dans les départements du Bas-Rhin, du Haut-Rhin et de la Moselle, et relatif à l'exécution forcée sur les immeubles, à la procédure en matière de purge des hypothèques et à la procédure d'ordre.", return_tensors='pt')
embeddings = juribert(encoded_sentence).last_hidden_state
print(embeddings)
# tensor([[[-0.5490, -1.4505, -0.6244, ..., -0.9739, 0.4767, -0.0655],
# [ 0.6415, -1.4368, 0.8708, ..., -0.4093, 0.6691, 0.7238],
# [-0.2195, -0.1235, 0.2674, ..., 0.5372, -0.4903, 0.5960],
# ...,
# [-1.4168, -1.3238, 1.1748, ..., 0.7590, 1.0338, -0.4865],
# [-0.5240, -0.7168, 0.8667, ..., -0.5848, 1.0086, -1.3153],
# [ 0.2743, -0.3438, 1.1101, ..., -0.5587, 0.0830, -0.3144]]],
# grad_fn=<NativeLayerNormBackward0>)
```
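To turn the token-level hidden states above into a single sentence vector, one common approach is attention-masked mean pooling. Below is a small sketch reusing the `tokenizer` and `juribert` objects loaded above; the pooling choice is our assumption, not part of the original card:

```python
# Illustrative: average the hidden states over non-padding tokens
# to obtain one fixed-size vector per sentence.
inputs = tokenizer("la chambre civile est une chambre de la cour de cassation.", return_tensors="pt")
hidden = juribert(**inputs).last_hidden_state        # (1, seq_len, hidden_size)
mask = inputs["attention_mask"].unsqueeze(-1)        # (1, seq_len, 1)
sentence_vector = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_vector.shape)                         # torch.Size([1, hidden_size])
```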
## Authors
JuriBERT was trained and evaluated at École Polytechnique in collaboration with HEC Paris by Stella Douka, Hadi Abdine, Michalis Vazirgiannis, Rajaa El Hamdani and David Restrepo Amariles.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{douka-etal-2021-juribert,
title = "{J}uri{BERT}: A Masked-Language Model Adaptation for {F}rench Legal Text",
author="Douka, Stella and Abdine, Hadi and Vazirgiannis, Michalis and El Hamdani, Rajaa and Restrepo Amariles, David",
booktitle="Proceedings of the Natural Legal Language Processing Workshop 2021",
month=nov,
year="2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nllp-1.9",
pages = "95--101",
}
```
|
dascim/juribert-small
|
dascim
| 2024-10-14T09:57:15Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"fr",
"arxiv:2110.01485",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-14T09:39:36Z |
---
language: fr
license: mit
pipeline_tag: fill-mask
library_name: transformers
---
# JuriBERT: A Masked-Language Model Adaptation for French Legal Text
## Introduction
[JuriBERT](https://arxiv.org/abs/2110.01485) is a set of BERT models (tiny, mini, small and base) pre-trained from scratch on French legal-domain-specific corpora.
JuriBERT models are pretrained on 6.3 GB of raw French legal text from two sources: the first dataset is crawled from Légifrance, and the other consists of anonymized court decisions and pleadings from the Court of Cassation (mémoires ampliatifs). The latter contains more than 100k long documents from different court cases.
The models are now available on Hugging Face in four versions with varying numbers of parameters.
## JuriBERT Pre-trained models
| Model | #params | Architecture |
|--------------------------------|--------------------------------|-------|
| `dascim/juribert-tiny` | 6M | Tiny `(L=2, H=128, A=2)` |
| `dascim/juribert-mini` | 15M | Mini `(L=4, H=256, A=4)`|
| `dascim/juribert-small` | 42M | Small `(L=6, H=512, A=8)`|
| `dascim/juribert-base` | 110M | Base `(L=12, H=768, A=12)` |
## JuriBERT Usage
##### Load JuriBERT and its sub-word tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
# You can replace "juribert-base" with any other model from the table, e.g. "dascim/juribert-small".
tokenizer = AutoTokenizer.from_pretrained("dascim/juribert-base")
juribert = AutoModel.from_pretrained("dascim/juribert-base")
juribert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
juribert_fill_mask = pipeline("fill-mask", model="dascim/juribert-base", tokenizer="dascim/juribert-base")
results = juribert_fill_mask("la chambre <mask> est une chambre de la cour de cassation.")
# results
# [{'score': 0.3455437421798706, 'token': 579, 'token_str': ' civile', 'sequence': 'la chambre civile est une chambre de la cour de cassation.'},
# {'score': 0.13046401739120483, 'token': 397, 'token_str': ' qui', 'sequence': 'la chambre qui est une chambre de la cour de cassation.'},
# {'score': 0.12387491017580032, 'token': 1060, 'token_str': ' sociale', 'sequence': 'la chambre sociale est une chambre de la cour de cassation.'},
# {'score': 0.05491165071725845, 'token': 266, 'token_str': ' c', 'sequence': 'la chambre c est une chambre de la cour de cassation.'},
# {'score': 0.04244831204414368, 'token': 2421, 'token_str': ' commerciale', 'sequence': 'la chambre commerciale est une chambre de la cour de cassation.'}]
```
##### Extract contextual embedding features from JuriBERT output
```python
encoded_sentence = tokenizer.encode("Les articles 21 et 22 de la présente annexe sont applicables au titre V de la loi du 1er juin 1924 mettant en vigueur la législation civile française dans les départements du Bas-Rhin, du Haut-Rhin et de la Moselle, et relatif à l'exécution forcée sur les immeubles, à la procédure en matière de purge des hypothèques et à la procédure d'ordre.", return_tensors='pt')
embeddings = juribert(encoded_sentence).last_hidden_state
print(embeddings)
# tensor([[[-0.5490, -1.4505, -0.6244, ..., -0.9739, 0.4767, -0.0655],
# [ 0.6415, -1.4368, 0.8708, ..., -0.4093, 0.6691, 0.7238],
# [-0.2195, -0.1235, 0.2674, ..., 0.5372, -0.4903, 0.5960],
# ...,
# [-1.4168, -1.3238, 1.1748, ..., 0.7590, 1.0338, -0.4865],
# [-0.5240, -0.7168, 0.8667, ..., -0.5848, 1.0086, -1.3153],
# [ 0.2743, -0.3438, 1.1101, ..., -0.5587, 0.0830, -0.3144]]],
# grad_fn=<NativeLayerNormBackward0>)
```
## Authors
JuriBERT was trained and evaluated at École Polytechnique in collaboration with HEC Paris by Stella Douka, Hadi Abdine, Michalis Vazirgiannis, Rajaa El Hamdani and David Restrepo Amariles.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{douka-etal-2021-juribert,
title = "{J}uri{BERT}: A Masked-Language Model Adaptation for {F}rench Legal Text",
author="Douka, Stella and Abdine, Hadi and Vazirgiannis, Michalis and El Hamdani, Rajaa and Restrepo Amariles, David",
booktitle="Proceedings of the Natural Legal Language Processing Workshop 2021",
month=nov,
year="2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nllp-1.9",
pages = "95--101",
}
```
|
dascim/juribert-base
|
dascim
| 2024-10-14T09:56:46Z | 191 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"fr",
"arxiv:2110.01485",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-10-14T09:40:19Z |
---
language: fr
license: mit
pipeline_tag: fill-mask
library_name: transformers
---
# JuriBERT: A Masked-Language Model Adaptation for French Legal Text
## Introduction
[JuriBERT](https://arxiv.org/abs/2110.01485) is a set of BERT models (tiny, mini, small and base) pre-trained from scratch on French legal-domain-specific corpora.
JuriBERT models are pretrained on 6.3 GB of raw French legal text from two sources: the first dataset is crawled from Légifrance, and the other consists of anonymized court decisions and pleadings from the Court of Cassation (mémoires ampliatifs). The latter contains more than 100k long documents from different court cases.
The models are now available on Hugging Face in four versions with varying numbers of parameters.
## JuriBERT Pre-trained models
| Model | #params | Architecture |
|--------------------------------|--------------------------------|-------|
| `dascim/juribert-tiny` | 6M | Tiny `(L=2, H=128, A=2)` |
| `dascim/juribert-mini` | 15M | Mini `(L=4, H=256, A=4)`|
| `dascim/juribert-small` | 42M | Small `(L=6, H=512, A=8)`|
| `dascim/juribert-base` | 110M | Base `(L=12, H=768, A=12)` |
## JuriBERT Usage
##### Load JuriBERT and its sub-word tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
# You can replace "juribert-base" with any other model from the table, e.g. "dascim/juribert-small".
tokenizer = AutoTokenizer.from_pretrained("dascim/juribert-base")
juribert = AutoModel.from_pretrained("dascim/juribert-base")
juribert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
juribert_fill_mask = pipeline("fill-mask", model="dascim/juribert-base", tokenizer="dascim/juribert-base")
results = juribert_fill_mask("la chambre <mask> est une chambre de la cour de cassation.")
# results
# [{'score': 0.3455437421798706, 'token': 579, 'token_str': ' civile', 'sequence': 'la chambre civile est une chambre de la cour de cassation.'},
# {'score': 0.13046401739120483, 'token': 397, 'token_str': ' qui', 'sequence': 'la chambre qui est une chambre de la cour de cassation.'},
# {'score': 0.12387491017580032, 'token': 1060, 'token_str': ' sociale', 'sequence': 'la chambre sociale est une chambre de la cour de cassation.'},
# {'score': 0.05491165071725845, 'token': 266, 'token_str': ' c', 'sequence': 'la chambre c est une chambre de la cour de cassation.'},
# {'score': 0.04244831204414368, 'token': 2421, 'token_str': ' commerciale', 'sequence': 'la chambre commerciale est une chambre de la cour de cassation.'}]
```
##### Extract contextual embedding features from JuriBERT output
```python
encoded_sentence = tokenizer.encode("Les articles 21 et 22 de la présente annexe sont applicables au titre V de la loi du 1er juin 1924 mettant en vigueur la législation civile française dans les départements du Bas-Rhin, du Haut-Rhin et de la Moselle, et relatif à l'exécution forcée sur les immeubles, à la procédure en matière de purge des hypothèques et à la procédure d'ordre.", return_tensors='pt')
embeddings = juribert(encoded_sentence).last_hidden_state
print(embeddings)
# tensor([[[-0.5490, -1.4505, -0.6244, ..., -0.9739, 0.4767, -0.0655],
# [ 0.6415, -1.4368, 0.8708, ..., -0.4093, 0.6691, 0.7238],
# [-0.2195, -0.1235, 0.2674, ..., 0.5372, -0.4903, 0.5960],
# ...,
# [-1.4168, -1.3238, 1.1748, ..., 0.7590, 1.0338, -0.4865],
# [-0.5240, -0.7168, 0.8667, ..., -0.5848, 1.0086, -1.3153],
# [ 0.2743, -0.3438, 1.1101, ..., -0.5587, 0.0830, -0.3144]]],
# grad_fn=<NativeLayerNormBackward0>)
```
## Authors
JuriBERT was trained and evaluated at École Polytechnique in collaboration with HEC Paris by Stella Douka, Hadi Abdine, Michalis Vazirgiannis, Rajaa El Hamdani and David Restrepo Amariles.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{douka-etal-2021-juribert,
title = "{J}uri{BERT}: A Masked-Language Model Adaptation for {F}rench Legal Text",
author="Douka, Stella and Abdine, Hadi and Vazirgiannis, Michalis and El Hamdani, Rajaa and Restrepo Amariles, David",
booktitle="Proceedings of the Natural Legal Language Processing Workshop 2021",
month=nov,
year="2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.nllp-1.9",
pages = "95--101",
}
```
|
zhangtaolab/plant-dnagpt-6mer-promoter
|
zhangtaolab
| 2024-10-14T09:55:14Z | 8 | 0 | null |
[
"safetensors",
"gpt2",
"DNA",
"biology",
"genomics",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-10-13T02:22:08Z |
---
license: cc-by-nc-sa-4.0
widget:
- text: AAAACATAATAATTTGCCGACTTACTCACCCTGTGATTAATCTATTTTCACTGTGTAGTAAGTAGAGAGTGTTACTTACTACAGTATCTATTTTTGTTTGGATGTTTGCCGTGGACAAGTGCTAACTGTCAAAACCCGTTTTGACCTTAAACCCAGCAATAATAATAATGTAAAACTCCATTGGGCAGTGCAACCTACTCCTCACATATTATATTATAATTCCTAAACCTTGATCAGTTAAATTAATAGCTCTGTTCCCTGTGGCTTTATATAAACACCATGGTTGTCAGCAGTTCAGCA
tags:
- DNA
- biology
- genomics
---
# Plant foundation DNA large language models
The plant DNA large language models (LLMs) comprise a series of foundation models based on different model architectures, pre-trained on various plant reference genomes.
All the models have a comparable size of 90-150 MB; a BPE tokenizer is used for tokenization, and the vocabulary contains 8,000 tokens.
**Developed by:** zhangtaolab
### Model Sources
- **Repository:** [Plant DNA LLMs](https://github.com/zhangtaolab/plant_DNA_LLMs)
- **Manuscript:** [Versatile applications of foundation DNA large language models in plant genomes]()
### Architecture
The model is based on the OpenAI GPT-2 architecture, with a tokenizer modified specifically for DNA sequences.
This model is fine-tuned for predicting active core promoters.
### How to use
Install the runtime library first:
```bash
pip install transformers
```
Here is a simple code for inference:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = 'plant-dnagpt-6mer-promoter'
# load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)
# inference
sequences = ['TTACTAAATTTATAACGATTTTTTATCTAACTTTAGCTCATCAATCTTTACCGTGTCAAAATTTAGTGCCAAGAAGCAGACATGGCCCGATGATCTTTTACCCTGTTTTCATAGCTCGCGAGCCGCGACCTGTGTCCAACCTCAACGGTCACTGCAGTCCCAGCACCTCAGCAGCCTGCGCCTGCCATACCCCCTCCCCCACCCACCCACACACACCATCCGGGCCCACGGTGGGACCCAGATGTCATGCGCTGTACGGGCGAGCAACTAGCCCCCACCTCTTCCCAAGAGGCAAAACCT',
'GACCTAATGATTAACCAAGGAAAAATGCAAGGATTTGACAAAAATATAGAAGCCAATGCTAGGCGCCTAAGTGAATGGATATGAAACAAAAAGCGAGCAGGCTGTCTATATATGGACAATTAGTTGCATTAATATAGTAGTTTATAATTGCAAGCATGGCACTACATCACAACACCTAAAAGACATGCCGTGATGCTAGAACAGCCATTGAATAAATTAGAAAGAAAGGTTGTGGTTAATTAGTTAACGACCAATCGAGCCTACTAGTATAAATTGTACCTCGTTGTTATGAAGTAATTC']
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer,
trust_remote_code=True, top_k=None)
results = pipe(sequences)
print(results)
```
### Training data
We use GPT2ForSequenceClassification to fine-tune the model.
The detailed training procedure can be found in our manuscript.
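As a minimal sketch of that setup (the `num_labels` value is an illustrative assumption, not taken from the manuscript):

```python
from transformers import GPT2ForSequenceClassification

# Illustrative: attach a binary classification head to the backbone,
# matching the GPT2ForSequenceClassification setup described above.
model = GPT2ForSequenceClassification.from_pretrained(
    'zhangtaolab/plant-dnagpt-6mer-promoter',
    num_labels=2,  # assumption: active vs. non-active core promoter
    trust_remote_code=True,
)
```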
#### Hardware
The model was trained on an NVIDIA GTX 1080 Ti GPU (11 GB).
|
s-nlp/xlmr_formality_classifier
|
s-nlp
| 2024-10-14T09:53:27Z | 623 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"formal or informal classification",
"en",
"fr",
"it",
"pt",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
- fr
- it
- pt
tags:
- formal or informal classification
licenses:
- cc-by-nc-sa
license: openrail++
base_model:
- FacebookAI/xlm-roberta-base
---
**Model Overview**
This is the model presented in the paper ["Detecting Text Formality: A Study of Text Classification Approaches"](https://aclanthology.org/2023.ranlp-1.31/).
An XLM-RoBERTa-based classifier trained on [XFORMAL](https://aclanthology.org/2021.naacl-main.256.bib), a multilingual formality classification dataset.
**Results**
All languages
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.744912 | 0.927790 | 0.826354 | 108019 |
| 1 | 0.889088 | 0.645630 | 0.748048 | 96845 |
| accuracy | | | 0.794405 | 204864 |
| macro avg | 0.817000 | 0.786710 | 0.787201 | 204864 |
| weighted avg | 0.813068 | 0.794405 | 0.789337 | 204864 |
EN
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.800053 | 0.962981 | 0.873988 | 22151 |
| 1 | 0.945106 | 0.725899 | 0.821124 | 19449 |
| accuracy | | | 0.852139 | 41600 |
| macro avg | 0.872579 | 0.844440 | 0.847556 | 41600 |
| weighted avg | 0.867869 | 0.852139 | 0.849273 | 41600 |
FR
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.746709 | 0.925738 | 0.826641 | 21505 |
| 1 | 0.887305 | 0.650592 | 0.750731 | 19327 |
| accuracy | | | 0.795504 | 40832 |
| macro avg | 0.817007 | 0.788165 | 0.788686 | 40832 |
| weighted avg | 0.813257 | 0.795504 | 0.790711 | 40832 |
IT
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.721282 | 0.914669 | 0.806545 | 21528 |
| 1 | 0.864887 | 0.607135 | 0.713445 | 19368 |
| accuracy | | | 0.769024 | 40896 |
| macro avg | 0.793084 | 0.760902 | 0.759995 | 40896 |
| weighted avg | 0.789292 | 0.769024 | 0.762454 | 40896 |
PT
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.717546 | 0.908167 | 0.801681 | 21637 |
| 1 | 0.853628 | 0.599700 | 0.704481 | 19323 |
| accuracy | | | 0.762646 | 40960 |
| macro avg | 0.785587 | 0.753933 | 0.753081 | 40960 |
| weighted avg | 0.781743 | 0.762646 | 0.755826 | 40960 |
## How to use
```python
from transformers import XLMRobertaTokenizerFast, XLMRobertaForSequenceClassification
# load tokenizer and model weights
tokenizer = XLMRobertaTokenizerFast.from_pretrained('s-nlp/xlmr_formality_classifier')
model = XLMRobertaForSequenceClassification.from_pretrained('s-nlp/xlmr_formality_classifier')
id2formality = {0: "formal", 1: "informal"}
texts = [
"I like you. I love you",
"Hey, what's up?",
"Siema, co porabiasz?",
"I feel deep regret and sadness about the situation in international politics.",
]
# prepare the input
encoding = tokenizer(
texts,
add_special_tokens=True,
return_token_type_ids=True,
truncation=True,
padding="max_length",
return_tensors="pt",
)
# inference
output = model(**encoding)
formality_scores = [
{id2formality[idx]: score for idx, score in enumerate(text_scores.tolist())}
for text_scores in output.logits.softmax(dim=1)
]
print(formality_scores)
```
```
[{'formal': 0.993225634098053, 'informal': 0.006774314679205418},
{'formal': 0.8807966113090515, 'informal': 0.1192033663392067},
{'formal': 0.936184287071228, 'informal': 0.06381577253341675},
{'formal': 0.9986615180969238, 'informal': 0.0013385231141000986}]
```
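If only the top label per text is needed, the logits can be reduced with argmax; a small convenience sketch building on the variables above (not part of the original card):

```python
# Pick the highest-scoring class for each input text.
pred_ids = output.logits.argmax(dim=1)
pred_labels = [id2formality[int(i)] for i in pred_ids]
print(pred_labels)  # per the scores above, all four example texts come out as 'formal'
```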
## Citation
```
@inproceedings{dementieva-etal-2023-detecting,
title = "Detecting Text Formality: A Study of Text Classification Approaches",
author = "Dementieva, Daryna and
Babakov, Nikolay and
Panchenko, Alexander",
editor = "Mitkov, Ruslan and
Angelova, Galia",
booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.ranlp-1.31",
pages = "274--284",
abstract = "Formality is one of the important characteristics of text documents. The automatic detection of the formality level of a text is potentially beneficial for various natural language processing tasks. Before, two large-scale datasets were introduced for multiple languages featuring formality annotation{---}GYAFC and X-FORMAL. However, they were primarily used for the training of style transfer models. At the same time, the detection of text formality on its own may also be a useful application. This work proposes the first to our knowledge systematic study of formality detection methods based on statistical, neural-based, and Transformer-based machine learning methods and delivers the best-performing models for public usage. We conducted three types of experiments {--} monolingual, multilingual, and cross-lingual. The study shows the overcome of Char BiLSTM model over Transformer-based ones for the monolingual and multilingual formality classification task, while Transformer-based classifiers are more stable to cross-lingual knowledge transfer.",
}
```
## Licensing Information
This model is licensed under the OpenRAIL++ License, which supports the development of various technologies—both industrial and academic—that serve the public good.
|
mav23/Qwen2.5-Math-72B-Instruct-GGUF
|
mav23
| 2024-10-14T09:51:16Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"text-generation",
"en",
"arxiv:2409.12122",
"base_model:Qwen/Qwen2.5-Math-72B",
"base_model:quantized:Qwen/Qwen2.5-Math-72B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-14T04:55:05Z |
---
license: other
license_name: qwen
base_model: Qwen/Qwen2.5-Math-72B
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
library_name: transformers
---
# Qwen2.5-Math-72B-Instruct
> [!Warning]
> <div align="center">
> <b>
> 🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.
> </b>
> </div>
## Introduction
In August 2024, we released the first series of mathematical LLMs of our Qwen family - [Qwen2-Math](https://qwenlm.github.io/blog/qwen2-math/). A month later, we upgraded it and open-sourced the **Qwen2.5-Math** series, including the base models **Qwen2.5-Math-1.5B/7B/72B**, instruction-tuned models **Qwen2.5-Math-1.5B/7B/72B-Instruct**, and the mathematical reward model **Qwen2.5-Math-RM-72B**.
Unlike the Qwen2-Math series, which only supports using Chain-of-Thought (CoT) to solve English math problems, the Qwen2.5-Math series is expanded to support both CoT and Tool-Integrated Reasoning (TIR) for solving math problems in both Chinese and English. The Qwen2.5-Math models achieve significant performance improvements over the Qwen2-Math models on Chinese and English mathematics benchmarks with CoT.

While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it faces challenges in achieving computational accuracy and handling complex mathematical or algorithmic reasoning tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR can further improve the model's proficiency in precise computation, symbolic manipulation, and algorithmic manipulation. Qwen2.5-Math-1.5B/7B/72B-Instruct achieve 79.7, 85.3, and 87.8 respectively on the MATH benchmark using TIR.
## Model Details
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2.5-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2.5-Math).
## Requirements
* `transformers>=4.37.0` for Qwen2.5-Math models. The latest version is recommended.
> [!Warning]
> <div align="center">
> <b>
> 🚨 This is a must because <code>transformers</code> integrated Qwen2 codes since <code>4.37.0</code>.
> </b>
> </div>
For requirements on GPU memory and the respective throughput, see similar results of Qwen2 [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Quick Start
> [!Important]
>
> **Qwen2.5-Math-72B-Instruct** is an instruction model for chatting;
>
> **Qwen2.5-Math-72B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning.
>
### 🤗 Hugging Face Transformers
Qwen2.5-Math can be deployed and run for inference in the same way as [Qwen2.5](https://github.com/QwenLM/Qwen2.5). Here is a code snippet showing how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-Math-72B-Instruct"
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
# Choose ONE of the two message lists below; the second assignment
# overwrites the first, so keep only the mode you want.
# CoT
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]
# TIR
messages = [
    {"role": "system", "content": "Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Citation
If you find our work helpful, feel free to give us a citation.
```
@article{yang2024qwen25mathtechnicalreportmathematical,
title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement},
author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
journal={arXiv preprint arXiv:2409.12122},
year={2024}
}
```
|
WizWhite/sol-y-luna
|
WizWhite
| 2024-10-14T09:50:19Z | 9 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"style",
"mythology",
"poster",
"comic",
"mesoamerica",
"hand drawn",
"mexican folk art",
"hand-lettering",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T09:50:17Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- style
- mythology
- poster
- comic
- mesoamerica
- hand drawn
- mexican folk art
- hand-lettering
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: 'vibrant poster art, featuring the title "SOL y LUNA" in the center with a hand-lettering folk-art style font, adorned with intricate flourishes . Surrounding the text, a stylized solar eclipse diagram is depicted in a geometric style. The background includes hints of papel picado motifs, with subtle floral and celestial elements woven into the design.
'
output:
url: >-
32826524.jpeg
- text: ' '
output:
url: >-
32826599.jpeg
- text: 'folk art poster. Muffled screams coming from behind a locked door
'
output:
url: >-
32826756.jpeg
- text: 'folk art poster. Portrait’s eyes moving as a shadow passes'
output:
url: >-
32826782.jpeg
- text: 'folk art poster. Pale hand reaching out from underneath a bed
'
output:
url: >-
32826786.jpeg
- text: 'folk art style Jack The Ripper, highly detailed, Slate Grey and peachy and Fern Green colors, with Fiery-shouldered Parakeet and floral decoration
'
output:
url: >-
32826885.jpeg
- text: 'a kiwi with a face and laughing heartily, donning a Space helmet.
'
output:
url: >-
32827008.jpeg
- text: 'a parsnip with a face and whistling lips, donning a Hairnet.
'
output:
url: >-
32827235.jpeg
- text: 'a spinach with a face and frowning in concentration, donning a Sailor hat.
'
output:
url: >-
32827297.jpeg
- text: 'a banana with a face, donning a large Jester hat'
output:
url: >-
32827334.jpeg
- text: 'an orange with a face, donning a large mustache'
output:
url: >-
32827431.jpeg
- text: 'Aztec mythology poster art depicting Danse Macabre, depicted in a Brujeria mesoamerican cosmovision style.
'
output:
url: >-
32827502.jpeg
- text: 'Aztec mythology poster art depicting Danse Macabre, depicted in a Brujeria mesoamerican cosmovision style.
'
output:
url: >-
32827561.jpeg
- text: 'Aztec mythology poster art depicting Danse Macabre, depicted in a Brujeria mesoamerican cosmovision style.
'
output:
url: >-
32827606.jpeg
- text: 'Aztec mythology poster art depicting Danse Macabre, depicted in a Brujeria mesoamerican cosmovision style.
'
output:
url: >-
32827642.jpeg
- text: 'folk art poster. Clown mask lying in a pool of blood
'
output:
url: >-
32827795.jpeg
- text: 'folk art poster. Creaking sound of an old rocking chair moving on its own
'
output:
url: >-
32827849.jpeg
- text: 'folk art poster. Something scuttling behind the walls, just out of sight
'
output:
url: >-
32827911.jpeg
- text: 'folk art style __character__, highly detailed, __colorScheme__ colors, with __bird__ and floral decoration'
output:
url: >-
32828062.jpeg
- text: 'folk art style __character__, highly detailed, __colorScheme__ colors, with __bird__ and floral decoration'
output:
url: >-
32827999.jpeg
---
# Sol y Luna
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
**Sol y Luna**
Mexican folk-art meets comic illustration. Great with hand-lettering, Mesoamerican mythology, busy posters, florals, animals and faces. Not so great with feet and landscapes.

Prompt inspiration: `Poster art`, `hand-lettering`, `calavera`, `folk-art`, `floral decoration`, `papel picado`, `intricate flourishes`, `sun and moon`

*Please share your creations in the gallery* ❤
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/sol-y-luna/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/sol-y-luna', weight_name='SOL_y_LUNA.safetensors')
image = pipeline('folk art style __character__, highly detailed, __colorScheme__ colors, with __bird__ and floral decoration').images[0]
```
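Note that the `__character__`, `__colorScheme__` and `__bird__` tokens in the prompt above are wildcard placeholders carried over from the CivitAI widget; substitute concrete values, for example (prompt taken from the card's own gallery):

```py
image = pipeline('folk art style Jack The Ripper, highly detailed, Slate Grey and peachy and Fern Green colors, with Fiery-shouldered Parakeet and floral decoration').images[0]
```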
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
WizWhite/wizard-s-grimdark-the-gloom
|
WizWhite
| 2024-10-14T09:50:05Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"horror",
"dark souls",
"concept art",
"grimdark",
"gothic",
"dark fantasy",
"style",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T09:50:01Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- horror
- dark souls
- concept art
- grimdark
- gothic
- dark fantasy
- style
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Gloomy
widget:
- text: 'gloomy scene from dark fantasy, Maggots crawling from a cracked skull in a crypt'
parameters:
negative_prompt: with acceleration LoRA
output:
url: >-
32835079.jpeg
- text: 'gloomy grimdark fantasy poster. The ghost of wizard whitebeard with a long beard. Glowing ember trail spells out "The Gloom"'
output:
url: >-
32830814.jpeg
- text: 'gloomy dark fantasy movie poster. a messy teenager''s bedroom'
output:
url: >-
32831147.jpeg
- text: 'gloomy dark fantasy batman'
output:
url: >-
32831237.jpeg
- text: 'gloomy dark fantasy movie poster. Cinematic photo of the ghost of wizard whitebeard ,with a long beard. Glowing ember trail spells out "The Gloom"'
output:
url: >-
32831041.jpeg
- text: 'gloomy scene from a dark fantasy buddy-cop tv show featuring police officers from 1970, Cop and partner bonding over their backstories,'
output:
url: >-
32831386.jpeg
- text: 'gloomy scene from dark fantasy, Polaroid pictures revealing ghostly figures'
output:
url: >-
32831574.jpeg
- text: 'gloomy scene from dark fantasy, Breathing heard from under the bed, though no one’s there
'
output:
url: >-
32831902.jpeg
- text: 'gloomy scene from dark fantasy, Rotting flesh peeling away from a still-moving hand
'
output:
url: >-
32832171.jpeg
- text: 'gloomy scene from dark fantasy, Blood dripping from the ceiling of a darkened hallway'
output:
url: >-
32832279.jpeg
- text: 'gloomy scene, A rebel with a futuristic weapon, overlooking a dystopian cityscape.
'
output:
url: >-
32832340.jpeg
- text: 'gloomy scene from dark fantasy, Unseen force yanking someone’s legs from beneath the bed
'
output:
url: >-
32832477.jpeg
- text: 'gloomy scene from dark fantasy, Unseen force dragging someone screaming into the darkness
'
output:
url: >-
32832938.jpeg
---
# Wizard's Grimdark: The Gloom
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
**Wizard's Grimdark: The Gloom**
A dark fantasy LoRA trained on video game concept art images, most prominently from the Dark Souls trilogy.

Prompt inspiration: `Gloomy`, `ethereal`, `dark fantasy`, `horror`, `grimdark`, `mist`, `ghastly figure`, `concept art`
## Trigger words
You should use `Gloomy` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-grimdark-the-gloom/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-grimdark-the-gloom', weight_name='Wiz-The_Gloom.safetensors')
image = pipeline('gloomy scene from dark fantasy, Unseen force dragging someone screaming into the darkness').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
WizWhite/wizard-s-vintage-sardine-tins
|
WizWhite
| 2024-10-14T09:49:34Z | 34 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"vintage",
"concept",
"packaging",
"text",
"product",
"tinned fish",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T09:49:32Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=False&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- vintage
- concept
- packaging
- text
- product
- tinned fish
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Vintage Sardine Tin
widget:
- text: 'photo of a vintage sardine tin, the title in white reads: "Wizard''s Sardines" in a bold font, portrait of wizard whitebeard, red and yellow background color, small text in the bottom says "Guaranteed Quality" and "Best Sardine LoRA on CIVITAI", the tin is laying on a pile of atlantic mackerel'
output:
url: >-
27596823.jpeg
- text: 'vintage dogg tin, featuring a portrait of snoop dogg, title text reads "Dogg food" bold in a black gothic blackletter fraktur font, bright yellow background color, small text says "Sardizzles in tomizzle-sizzle"
'
output:
url: >-
27596935.jpeg
- text: 'vintage sardine tin, anthropomorphic fish detective, the fish is wearing a trench coat, title text reads "it smells fishy", 1950s pulp art aesthetics
'
output:
url: >-
27596988.jpeg
- text: 'tinned blobfish, featuring a slimy and pink blobfish, with drool dripping from the mouth, the background is in pink with navy blue colored text that says "The Slobbin '' Blobfish" and "brand", small text in the bottom reads "Battered blobfish in Olive Oil"
'
output:
url: >-
27597084.jpeg
- text: ' '
output:
url: >-
27597266.jpeg
- text: 'vintage sardine tin of "swedish fish", with a photo of swedish fish candy, red and yellow,
'
output:
url: >-
27597293.jpeg
- text: ' '
output:
url: >-
27597440.jpeg
- text: 'vintage sardine tin, "the Twinkling Sasquatch", Indigo Blue and Peachy Orange and Lime Green colors, large title text
'
output:
url: >-
27597577.jpeg
- text: ' '
output:
url: >-
27597661.jpeg
- text: ' '
output:
url: >-
27597694.jpeg
- text: ' '
output:
url: >-
27597738.jpeg
- text: 'vintage sardine tin, "the Glittering Ghost", Teal and Peachy Pink and Lavender Purple colors
'
output:
url: >-
27597784.jpeg
- text: ' '
output:
url: >-
27598137.jpeg
- text: ' '
output:
url: >-
27598143.jpeg
- text: ' '
output:
url: >-
27599510.jpeg
- text: 'vintage foie gras tin, with an image of Holgersson holding a dead goose by its neck, title text reads "Foie Gras de Nils" in a serif font, "Have your friends for dinner", on a wooden table surface
'
output:
url: >-
27783464.jpeg
- text: 'vintage foie gras tin, with an image of Holgersson holding a dead goose by its neck, title text reads "Foie Gras de Nils" in a serif font, "Have your friends for dinner", on a wooden table surface'
output:
url: >-
27783457.jpeg
- text: 'vintage foie gras tin, with an image of Holgersson holding a dead goose by its neck, title text reads "Foie Gras de Nils" in a serif font, "Have your friends for dinner", on a wooden table surface
'
output:
url: >-
27783482.jpeg
---
# Wizard's Vintage Sardine Tins
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
### Hey, glad you're into making sardine tins! Let's go!

Just use `Vintage Sardine Tin` in your prompt, or `Vintage […] Tin` if you want to do something else. (If you're more into mackerels or anchovies – that's ok too!)

**Example prompts:**

```
vintage {fish} tin, featuring a {subject}, with {description/action/style}, title text reads "{title}" in a {adjective/style} font, {colors} background, small text in the bottom say "{fineprint}"
```

```
photo of a vintage sardine tin, the title in white reads: "{title}" in a bold font, portrait of {person}, {colors} background color, small text in the bottom says "{fineprint}" and "{fineprint}", the tin is on a surface made of {material}
```

Write in the comment field if you have any questions! Post your tins to the gallery!
## Trigger words
You should use `Vintage Sardine Tin`, `Tinned Fish`, `vintage xyz tin` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-vintage-sardine-tins/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-vintage-sardine-tins', weight_name='Wiz-SardineTins_Flux.safetensors')
image = pipeline('vintage foie gras tin, with an image of Holgersson holding a dead goose by its neck, title text reads "Foie Gras de Nils" in a serif font, "Have your friends for dinner", on a wooden table surface').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
WizWhite/wizard-s-acid-reflux
|
WizWhite
| 2024-10-14T09:49:23Z | 1,825 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"psychedelic",
"style",
"blacklight",
"acid",
"lowbrow art",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-12T13:03:11Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- psychedelic
- style
- blacklight
- acid
- lowbrow art
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: 'psychadelic acid art, render of Majestic wizard whitebeard hologram, flourescent yellow, green gel lighting, poster art, title reads "ACID REFLUX"
'
output:
url: >-
26104803.jpeg
- text: 'psychadelic acid art, Illuminated kraken skull with tentacles, vibrant terracotta colored , noon, poster art, glowing
'
output:
url: >-
26104983.jpeg
- text: 'psychadelic acid art, Illuminated kraken with terracotta tentacles, noon, poster art
'
output:
url: >-
26104994.jpeg
- text: 'psychadelic acid art, Lustrous kraken with Cracked earth tentacles, evening, poster art
'
output:
url: >-
26104819.jpeg
- text: 'psychadelic acid art, vibrant flourescent colors, On the Invisible Train to Nowhere, poster art'
output:
url: >-
26104838.jpeg
- text: 'psychadelic acid pop art lowbrow snot style closeup portrait of zombie darth vader, His iconic helmet is warped and melting, with vibrant, fluorescent colors like neon pinks, greens, and blues dripping down its surface. The details of the helmet are exaggerated, with bulging, bloodshot eyes peering out from beneath. The mouthpiece is oozing green slime, adding to the grotesque, snotty aesthetic. The background is a chaotic swirl of bright, clashing colors, filled with abstract shapes and patterns reminiscent of a trippy, 70s-inspired acid trip. The overall vibe is grotesque yet playful, blending dark humor with vivid, surreal imagery
'
output:
url: >-
26105145.jpeg
- text: 'psychadelic acid pop art lowbrow snot style concert poster, skeletor wizard whitebeard is twisted warped and melting, with vibrant fluorescent colors like neon pinks and cyan dripping down. Chaotic swirl of bright clashing colors, filled with abstract shapes and trippy patterns, The overall vibe is grotesque yet playful, blending dark humor with vivid, surreal imagery
'
output:
url: >-
26105199.jpeg
- text: 'psychadelic acid art, extreme close-up of squidward, extremely detailed texture
'
output:
url: >-
26105235.jpeg
- text: 'psychadelic acid art, vibrant flourescent colors, Tango with a T-Rex, poster art
'
output:
url: >-
26105256.jpeg
- text: 'psychadelic acid art concert poster for "wizard whitebeard"'
output:
url: >-
26116077.jpeg
- text: 'psychadelic acid art portrait of a Bodil Hess'
output:
url: >-
26116172.jpeg
- text: 'acid art skeletor, neon green and yellow, wet gel texture
'
output:
url: >-
26116346.jpeg
- text: 'psychadelic acid art, vibrant flourescent colors, City of Glass and Steel, poster art
'
output:
url: >-
26116482.jpeg
- text: 'psychadelic acid art, in the center a dripping large detailed title reads: "IT''S FUNKADELICS FAULT", behind it wizard whitebeard wraps around in-and-out of the letters filling up the whole image, detailed texture, vibrant flourenscent, tentacles and slime, exaggerated'
output:
url: >-
26156781.jpeg
- text: ' '
output:
url: >-
32780775.jpeg
---
# Wizard's Acid ReFlux
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
**Psychedelic Acid Lowbrow Blacklight Horror Concert poster + snot**

Or something to that effect.

I collected a dataset of all sorts of images for this experimental LoRA: acid art, psychedelic stuff, lowbrow illustration, concert posters, horror tarots, snot-style illustration, blacklight illustrations. Ended up with a very fun LoRA that I haven't figured out myself really.

```
Trigger: Psychadelic acid art
```

I realize I misspelled it as "Psychadelic" (I blame Funkadelics), but that was what I used in my captions.

```
Useful tags: Flourescent, concert poster, drips, snot art, lowbrow, zombie, neon, grotesque, slime, oozing, snotty, lsd trip, trippy
```

Let me know if you have any questions. I've got hundreds myself!
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-acid-reflux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-acid-reflux', weight_name='Wizards_Acid_Reflux.safetensors')
image = pipeline('Your custom prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
WizWhite/wizard-s-scrap-yard-supermarionation-puppets
|
WizWhite
| 2024-10-14T09:49:12Z | 7 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"puppet",
"supermarionation",
"ugly",
"style",
"marionettes",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-12T13:03:26Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- puppet
- supermarionation
- ugly
- style
- marionettes
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Thunderbirds Puppet
widget:
- text: 'cinematic scene depicting a thunderbirds puppet wizard whitebeard laughing, the wizard has a long white beard, he is standing in a 1960s modernism style living room, bursting into a loud guffaw with an open mouth, dramatic lighting, ultra sharp image with high level of detail'
output:
url: >-
28257508.jpeg
- text: 'cinematic scene depicting a thunderbirds puppet blonde girl lying in the grass, she''s wearing black leotard and track pants, morning, dramatic lighting, ultra sharp image with high level of detail'
output:
url: >-
28254732.jpeg
- text: 'Pikachu Observing stars with a telescope
--
Puppet Pikachu Observing stars with a telescope
--
Thunderbirds Puppet Pikachu Observing stars with a telescope
'
output:
url: >-
28258044.jpeg
- text: 'cinematic scene depicting a thunderbirds puppet Hello Kitty wearing Renaissance dress while laughing while eating ice cream, night, dramatic lighting, ultra sharp image with high level of detail
'
output:
url: >-
28254764.jpeg
- text: 'frustrated iron man as a thunderbirds puppet, wearing the iron man suit, in the laundry room, holding a green colored shirt, on the washer is a laundry detergent packaging with the hulk as a puppet,
'
output:
url: >-
28254715.jpeg
- text: 'close-up on the face of thunderbirds puppet wizard whitebeard laughing, with white long beard and big eyebrows, a loud guffaw with an open mouth, his eyes are closed tight and pool of tears are coming,
'
output:
url: >-
28255123.jpeg
- text: 'girl with a pearl earring by johannes vermeer, thunderbirds puppet'
output:
url: >-
28254662.jpeg
- text: 'the mona lisa by leonado davinci, thunderbirds puppet'
output:
url: >-
28254654.jpeg
- text: 'cinematic scene depicting a thunderbirds puppet Jon Snow riding a mechanical horse, late-morning, dramatic lighting, ultra sharp image with high level of detail
'
output:
url: >-
28255038.jpeg
- text: 'cinematic scene depicting a thunderbirds puppet Sandman wearing Mummy wrappings while Taking a lunch break, nightfall, dramatic lighting, ultra sharp image with high level of detail'
output:
url: >-
28254740.jpeg
- text: 'cinematic scene depicting a thunderbirds puppet Barbie wearing Ancient Roman toga while Waking up in a strange place, midnight, dramatic lighting, ultra sharp image with high level of detail
'
output:
url: >-
28254783.jpeg
- text: 'cinematic scene depicting a thunderbirds puppet man wearing 1950s greaser outfit while falling asleep under the stars, midnight, dramatic lighting, ultra sharp image with high level of detail
'
output:
url: >-
28254792.jpeg
- text: 'thunderbird puppet luke skywalker reading a magazine while sitting on the toilet, title of the magazine reads "Father-figures", and features darth vader on the cover,
'
output:
url: >-
28255081.jpeg
- text: 'antique fortune teller machine at the amusement park, the machine has an animatronic thunderbird puppet wizard whitebeard inside, neon light sign text reads "Wizard Speaks", on a red metal plaquard below: "Make Your Wish - 25 Buzz"'
output:
url: >-
28919973.jpeg
- text: 'tivoli fortune-telling machine, the automaton machine has a thunderbird puppet wizard whitebeard mannequin inside, neon light sign text reads "Wizard Speaks" and below "Make A Wish: 25 buzz"
'
output:
url: >-
28920071.jpeg
---
# Wizard's Scrap Yard: Supermarionation Puppets
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
**Welcome to the Wizard's Scrap Yard**
Browse around and see if you find something you like. Mostly puppets here at the moment.

T H U N D E R B I R D S:
A classic British puppet sci-fi TV show from the 60s, and the inspiration for Team America: World Police. The puppets were created by Sylvia Anderson, who co-created the show with her husband Gerry Anderson.

Trigger: `Thunderbirds Puppet` or simply `puppet`

--

S K R O T N I S S E:
The Swedish puppet-animated series The Tale of Skrotnisse and His Friends (1985); a LoRA that produces characters with exaggerated facial features.

Trigger: `puppet,` can help.

LoRA strength between 0.7 and 1.2; the [Flux1.Dev](http://Flux1.Dev) base seems to produce the best output for me.

Trained on screengrabs from the TV show based on the books by Jan Lööf, a show that has traumatized an entire generation of Swedes with its puppets' uncanny appearance.

- 45 low-quality screengrabs trained at 512px resolution
- Minimal captions: `puppet, charactername`
- minSNRgamma: 2 and unetLR: 0.00035
- Final epoch was at only 675 steps

---

Samples of the original shows:
- https://www.youtube.com/embed/1ESV9uw_Fkk
- https://www.youtube.com/embed/wLiH4xrCITI

**AI_Noire** created an amazing-looking SDXL LoRA on the same concept:
https://civitai.com/models/625708/skrotnisse-swedish-puppet-show-from-1985-sdxl-v1
## Trigger words
You should use `Thunderbirds Puppet`, `Puppet` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-scrap-yard-supermarionation-puppets/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-scrap-yard-supermarionation-puppets', weight_name='Wiz-Thunderbird_Puppets.safetensors')
image = pipeline('tivoli fortune-telling machine, the automaton machine has a thunderbird puppet wizard whitebeard mannequin inside, neon light sign text reads "Wizard Speaks" and below "Make A Wish: 25 buzz"').images[0]
```
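The description above recommends a LoRA strength of 0.7-1.2. A hedged sketch of applying that scale, continuing from the snippet above (`fuse_lora` support for FLUX LoRAs may vary by diffusers version):
```py
# Hedged sketch, not from the original card: bake the LoRA in at a chosen strength.
pipeline.fuse_lora(lora_scale=0.9)  # description recommends 0.7-1.2
image = pipeline('Thunderbirds Puppet Pikachu Observing stars with a telescope').images[0]
```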
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF
|
mradermacher
| 2024-10-14T09:49:05Z | 117 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-14T07:55:06Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
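For a concrete starting point, a minimal sketch using `llama-cpp-python` (a hedged assumption, not from the original card; requires `llama-cpp-python` and `huggingface_hub`, and the filename is the recommended Q4_K_M quant from the table below):
```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo and loads it with llama.cpp
llm = Llama.from_pretrained(
    repo_id="mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF",
    filename="Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```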
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF
|
mradermacher
| 2024-10-14T09:49:05Z | 15 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-13T11:32:51Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-12B-ArliAI-RPMax-v1.1-GGUF/resolve/main/Mistral-Nemo-12B-ArliAI-RPMax-v1.1.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
WizWhite/wizard-s-horror-library
|
WizWhite
| 2024-10-14T09:49:05Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"horror",
"vintage",
"analog",
"photo",
"style",
"photography",
"portraits",
"romantic",
"1900s",
"20th century",
"pictorialism",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-14T09:49:04Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- horror
- vintage
- analog
- photo
- style
- photography
- portraits
- romantic
- 1900s
- 20th century
- pictorialism
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: by william mortensen
widget:
- text: 'by william mortensen, heist portrait of wizard whitebeard Giving a high-five, pre-code hollywood horror portraiture style'
output:
url: >-
28926101.jpeg
- text: 'darth vader riding a penny farthing high wheel bicycle, by william mortensen, vintage horror theme with a haunting atmosphere, dramatic lighting, creative use of light and shadow play
'
output:
url: >-
28926275.jpeg
- text: 'portrait of batman, by william mortensen'
output:
url: >-
28926874.jpeg
- text: 'by william mortensen, abandoned ghost ship drifting on the misty sea
'
output:
url: >-
28926367.jpeg
- text: 'by william mortensen, space opera portrait of wizard whitebeard riding, pre-code hollywood horror portraiture style
'
output:
url: >-
28926346.jpeg
- text: 'by william mortensen, telenovela portrait of wizard whitebeard Kneeling, pre-code hollywood horror portraiture style
'
output:
url: >-
28926702.jpeg
- text: 'by william mortensen, utopian portrait of wizard whitebeard programming, pre-code hollywood horror portraiture style
'
output:
url: >-
28926739.jpeg
- text: 'by william mortensen, splatter portrait of wizard whitebeard flipping, pre-code hollywood horror portraiture style
'
output:
url: >-
28926783.jpeg
- text: 'by william mortensen, portrait of a witch, she is wearing an intense red lipstick and dark eye shadow, muted and faded colors
'
output:
url: >-
28927614.jpeg
- text: 'color photo portrait of a witch, by william mortensen, she is wearing an intense red lipstick and dark eye shadow
'
output:
url: >-
28927624.jpeg
- text: 'by william mortensen, dramatic extreme close-up portrait of a frisian bene gesserit bride, her face partly obscured by a intricate lace veil.
'
output:
url: >-
28927685.jpeg
- text: 'by william mortensen, dramatic portrait of Captain Kirk yelling, pre-code hollywood horror portraiture style
'
output:
url: >-
28927727.jpeg
- text: 'by william mortensen, dramatic portrait of Darkseid Exercising, pre-code hollywood horror portraiture style
'
output:
url: >-
28927768.jpeg
- text: 'by william mortensen, gentleman werewolf wearing a trilby hat, pre-code hollywood horror portraiture style, small text in the bottom reads "M''lady"
'
output:
url: >-
28927858.jpeg
- text: 'close-up portrait of skeletor with he-man hair, wearing sunglasses, muted colors, by william mortensen
'
output:
url: >-
28929425.jpeg
- text: 'by william mortensen, portrait of yoda
'
output:
url: >-
28946448.jpeg
- text: 'by william mortensen, portrait of chewbacca
'
output:
url: >-
28946525.jpeg
- text: ' '
output:
url: >-
29028653.jpeg
- text: 'portrait of Homer Simpson, by william mortensen'
output:
url: >-
29029416.jpeg
- text: 'by william mortensen, portrait of Homer Simpson'
output:
url: >-
29029374.jpeg
---
# Wizard's Horror Library
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<h3 id="wizard's-horror:-william-mortensen-portraiture-951imf1fg"><em><span style="color:#fa5252">Wizard's Horror: </span></em><strong><em><span style="color:#fa5252">William Mortensen Portraiture</span></em></strong></h3><p>Style & Mood LoRA of the pioneering photographer WIlliam Mortensen.<br />Sepia-toned, hazy, horror portraiture portraits. <br />Trigger: <strong>by william mortensen</strong><br />Recommendation is short prompts and play with the CFG.</p><p><em>Please share your creations in the gallery!</em></p>
## Trigger words
You should use `by william mortensen`, `vintage horror`, `ethereal`, `dark and moody` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/WizWhite/wizard-s-horror-library/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('WizWhite/wizard-s-horror-library', weight_name='Wiz-Horror_William_Mortensen.safetensors')
image = pipeline('by william mortensen, portrait of Homer Simpson').images[0]
```
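The description suggests playing with the CFG; in diffusers this is the `guidance_scale` argument. A hedged variation of the call above (values are illustrative; 3.5 is the diffusers default for this pipeline):
```py
# Hedged sketch, not from the original card: sweep guidance_scale and compare results.
for cfg in (2.5, 3.5, 5.0):
    image = pipeline('by william mortensen, portrait of a witch, muted and faded colors',
                     guidance_scale=cfg).images[0]
    image.save(f"witch_cfg_{cfg}.png")
```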
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
yee050201/distilgpt2-finetuned-wikitext2
|
yee050201
| 2024-10-14T09:45:23Z | 179 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-14T09:09:25Z |
---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
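(For reference, a cross-entropy loss of 3.6425 corresponds to a perplexity of exp(3.6425) ≈ 38.2.) A minimal generation sketch, assuming standard `transformers` pipeline usage (not from the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="yee050201/distilgpt2-finetuned-wikitext2")
print(generator("The history of the city", max_new_tokens=40)[0]["generated_text"])
```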
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7487 | 1.0 | 2334 | 3.6663 |
| 3.648 | 2.0 | 4668 | 3.6462 |
| 3.6015 | 3.0 | 7002 | 3.6425 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
arjunanand13/florence-enphaseall2-30e
|
arjunanand13
| 2024-10-14T09:45:13Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"image-text-to-text",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-13T11:00:24Z |
---
library_name: transformers
pipeline_tag: image-text-to-text
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
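In the absence of official instructions, a hedged loading sketch based only on the repo tags (`florence2`, `custom_code`, `image-text-to-text`); unverified for this checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoProcessor

repo = "arjunanand13/florence-enphaseall2-30e"
# Florence-2-style checkpoints ship custom modeling code, hence trust_remote_code
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
```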
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andsedov/flat_picture_classifier
|
andsedov
| 2024-10-14T09:45:04Z | 223 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-07-31T14:22:57Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: flat_picture_classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8560311198234558
---
# flat_picture_classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
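A minimal inference sketch (standard 🤗 `pipeline` usage, not from the original card; the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="andsedov/flat_picture_classifier")
print(classifier("apartment_photo.jpg"))  # hypothetical local file or URL of a listing photo
```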
## Example Images
#### apartment bathroom

#### apartment bedroom

#### apartment dining room

#### apartment house facade

#### apartment in-house gym

#### apartment indoor pool

#### apartment kitchen

#### apartment living room

#### apartment outdoor pool

#### apartment study room

#### apartment terrace

#### apartment view

#### apartment yard

#### empty room

|
ariG23498/t5-v1_1-xxl-torch
|
ariG23498
| 2024-10-14T09:44:06Z | 44 | 1 | null |
[
"safetensors",
"t5",
"region:us"
] | null | 2024-10-14T09:33:34Z |
```py
import torch
from transformers import T5EncoderModel, T5Tokenizer

# load the T5 v1.1 XXL text encoder in bfloat16 and push it to the Hub
checkpoint = "google/t5-v1_1-xxl"
model = T5EncoderModel.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
)
tokenizer = T5Tokenizer.from_pretrained(checkpoint)
model.push_to_hub("ariG23498/t5-v1_1-xxl-torch")
tokenizer.push_to_hub("ariG23498/t5-v1_1-xxl-torch")
```
|
ariG23498/clip-vit-large-patch14-torch
|
ariG23498
| 2024-10-14T09:43:39Z | 33 | 1 | null |
[
"safetensors",
"clip_text_model",
"region:us"
] | null | 2024-10-14T09:42:21Z |
```py
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# load the CLIP ViT-L/14 text encoder in bfloat16 and push it to the Hub
checkpoint = "openai/clip-vit-large-patch14"
model = CLIPTextModel.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
)
tokenizer = CLIPTokenizer.from_pretrained(checkpoint)
model.push_to_hub("ariG23498/clip-vit-large-patch14-torch")
tokenizer.push_to_hub("ariG23498/clip-vit-large-patch14-torch")
```
|
arjunanand13/florence-enphaseall2-25e
|
arjunanand13
| 2024-10-14T09:42:32Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-13T08:05:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s-nlp/russian_toxicity_classifier
|
s-nlp
| 2024-10-14T09:39:39Z | 10,750 | 38 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"toxic comments classification",
"ru",
"base_model:DeepPavlov/rubert-base-cased-conversational",
"base_model:finetune:DeepPavlov/rubert-base-cased-conversational",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- ru
tags:
- toxic comments classification
licenses:
- cc-by-nc-sa
license: openrail++
base_model:
- DeepPavlov/rubert-base-cased-conversational
---
BERT-based classifier (fine-tuned from [Conversational Rubert](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)) trained on a merge of the Russian Language Toxic Comments [dataset](https://www.kaggle.com/blackmoon/russian-language-toxic-comments/metadata) collected from 2ch.hk and the Toxic Russian Comments [dataset](https://www.kaggle.com/alexandersemiletov/toxic-russian-comments) collected from ok.ru.
The datasets were merged, shuffled, and split into train/dev/test splits in an 80-10-10 proportion.
The metrics obtained on the test dataset are as follows:
| | precision | recall | f1-score | support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| 0 | 0.98 | 0.99 | 0.98 | 21384 |
| 1 | 0.94 | 0.92 | 0.93 | 4886 |
| accuracy | | | 0.97 | 26270|
| macro avg | 0.96 | 0.96 | 0.96 | 26270 |
| weighted avg | 0.97 | 0.97 | 0.97 | 26270 |
## How to use
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# load tokenizer and model weights
tokenizer = BertTokenizer.from_pretrained('s-nlp/russian_toxicity_classifier')
model = BertForSequenceClassification.from_pretrained('s-nlp/russian_toxicity_classifier')

# prepare the input ('ты супер' means "you're great")
batch = tokenizer.encode('ты супер', return_tensors='pt')

# inference: convert logits to class probabilities
# (class 1 is the toxic label; see the metrics table above)
with torch.no_grad():
    probs = torch.softmax(model(batch).logits, dim=-1)
print(probs)
```
## Citation
To acknowledge our work, please use the corresponding citation:
```
@article{dementieva2022russe,
title={RUSSE-2022: Findings of the First Russian Detoxification Shared Task Based on Parallel Corpora},
author={Dementieva, Daryna and Logacheva, Varvara and Nikishina, Irina and Fenogenova, Alena and Dale, David and Krotova, Irina and Semenov, Nikita and Shavrina, Tatiana and Panchenko, Alexander}
}
```
## Licensing Information
This model is licensed under the OpenRAIL++ License, which supports the development of various technologies—both industrial and academic—that serve the public good.
|
oksjjj/xlm-roberta-base-finetuned-panx-all
|
oksjjj
| 2024-10-14T09:39:00Z | 125 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-14T09:21:46Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- F1: 0.8558
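A minimal usage sketch (standard `transformers` token-classification pipeline, not from the original card; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="oksjjj/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Paris."))
```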
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.299 | 1.0 | 835 | 0.2074 | 0.8078 |
| 0.1587 | 2.0 | 1670 | 0.1705 | 0.8461 |
| 0.1012 | 3.0 | 2505 | 0.1758 | 0.8558 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
nick911/hermes-3-llama-3-1-405b-fp8
|
nick911
| 2024-10-14T09:36:01Z | 5 | 0 | null |
[
"safetensors",
"llama",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"en",
"arxiv:2408.11857",
"base_model:meta-llama/Llama-3.1-405B",
"base_model:finetune:meta-llama/Llama-3.1-405B",
"license:llama3",
"region:us"
] | null | 2024-10-14T09:36:00Z |
---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
base_model: meta-llama/Meta-Llama-3.1-405B
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
model-index:
- name: Hermes-3-Llama-3.1-405B
results: []
---
# Hermes 3 - Llama-3.1 405B

## Model Description
Hermes 3 405B is the latest flagship model in the Hermes series of LLMs by Nous Research, and the first full parameter finetune since the release of Llama-3.1 405B.
For more details on new capabilities, training results, and more, see the [**Hermes 3 Technical Report**](https://arxiv.org/abs/2408.11857).
**This is the FP8 quantized version of Hermes 405B, for use with [vLLM](https://github.com/vllm-project/vllm). For the Full HF BF16 Model, click [here](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B)**
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.
Hermes 3 405B is a frontier level, full parameter finetune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
Special thanks to **LambdaLabs** for their 1-Click Cluster access provided to us to train this model! [Check out LambdaLabs' 1CC Here](https://lambdalabs.com/service/gpu-cloud/1-click-clusters).
# Benchmarks
Hermes 3 is competitive with, if not superior to, Llama-3.1 Instruct models at general capabilities, with varying strengths and weaknesses between the two.
Full benchmark comparisons below:

# Prompt Format
Hermes 3 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This format is more complex than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role for each turn.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are Hermes 3, a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 3, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 3."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function-signature JSON, as the example below shows.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
Once you parse the tool call, call the API, get the returned values, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, under which it responds with **only** a JSON object that conforms to a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
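For illustration, a hedged sketch of producing such a {schema} from a pydantic model (the linked jsonmode.py script wraps a similar flow; the class below is hypothetical):
```python
import json
from pydantic import BaseModel

class Character(BaseModel):  # hypothetical example schema
    name: str
    age: int

# paste this JSON into the <schema> block of the system prompt above
schema = json.dumps(Character.model_json_schema())
```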
# Inference
The Hermes 405B model requires over 800GB of VRAM to load in FP16. To remedy this, we have used NeuralMagic's FP8 quantization method to provide a pre-quantized model that fits in roughly 430GB of VRAM and is compatible with the `VLLM` inference engine.
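Since this FP8 checkpoint targets vLLM, here is a minimal serving sketch (hedged, not from the original card; `tensor_parallel_size` must match your GPU count, and a multi-GPU node with enough total VRAM for ~430GB of weights is assumed):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="NousResearch/Hermes-3-Llama-3.1-405B-FP8", tensor_parallel_size=8)
params = SamplingParams(temperature=0.8, max_tokens=256)
prompt = "<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n"
print(llm.generate([prompt], params)[0].outputs[0].text)
```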
You can also load the FP16 model in 8-bit or 4-bit with `bitsandbytes` using HuggingFace Transformers (not recommended, as it is slower), by setting `load_in_4bit` or `load_in_8bit` like so:
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-3-Llama-3.1-405B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-3-Llama-3.1-405B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

## Quantized Versions:
NeuralMagic FP8 Quantization (for use with VLLM): https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-405B-FP8
# How to cite:
```bibtex
@misc{teknium2024hermes3technicalreport,
title={Hermes 3 Technical Report},
author={Ryan Teknium and Jeffrey Quesnelle and Chen Guang},
year={2024},
eprint={2408.11857},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.11857},
}
```
|
BlackBeenie/llama-3-luminous-merged
|
BlackBeenie
| 2024-10-14T09:34:22Z | 8 | 0 | null |
[
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"nbeerbower/llama-3-stella-8B",
"defog/llama-3-sqlcoder-8b",
"nbeerbower/llama-3-gutenberg-8B",
"openchat/openchat-3.6-8b-20240522",
"Kukedlc/NeuralLLaMa-3-8b-DT-v0.1",
"cstr/llama3-8b-spaetzle-v20",
"mlabonne/ChimeraLlama-3-8B-v3",
"flammenai/Mahou-1.1-llama3-8B",
"KingNish/KingNish-Llama3-8b",
"base_model:Kukedlc/NeuralLLaMa-3-8b-DT-v0.1",
"base_model:merge:Kukedlc/NeuralLLaMa-3-8b-DT-v0.1",
"base_model:cstr/llama3-8b-spaetzle-v20",
"base_model:merge:cstr/llama3-8b-spaetzle-v20",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:merge:defog/llama-3-sqlcoder-8b",
"base_model:flammenai/Mahou-1.1-llama3-8B",
"base_model:merge:flammenai/Mahou-1.1-llama3-8B",
"base_model:mlabonne/ChimeraLlama-3-8B-v3",
"base_model:merge:mlabonne/ChimeraLlama-3-8B-v3",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:merge:nbeerbower/llama-3-gutenberg-8B",
"base_model:nbeerbower/llama-3-stella-8B",
"base_model:merge:nbeerbower/llama-3-stella-8B",
"base_model:openchat/openchat-3.6-8b-20240522",
"base_model:merge:openchat/openchat-3.6-8b-20240522",
"model-index",
"region:us"
] | null | 2024-09-15T13:04:06Z |
---
tags:
- merge
- mergekit
- lazymergekit
- nbeerbower/llama-3-stella-8B
- defog/llama-3-sqlcoder-8b
- nbeerbower/llama-3-gutenberg-8B
- openchat/openchat-3.6-8b-20240522
- Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
- cstr/llama3-8b-spaetzle-v20
- mlabonne/ChimeraLlama-3-8B-v3
- flammenai/Mahou-1.1-llama3-8B
- KingNish/KingNish-Llama3-8b
base_model:
- nbeerbower/llama-3-stella-8B
- defog/llama-3-sqlcoder-8b
- nbeerbower/llama-3-gutenberg-8B
- openchat/openchat-3.6-8b-20240522
- Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
- cstr/llama3-8b-spaetzle-v20
- mlabonne/ChimeraLlama-3-8B-v3
- flammenai/Mahou-1.1-llama3-8B
- KingNish/KingNish-Llama3-8b
model-index:
- name: llama-3-luminous-merged
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 43.23
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 7.85
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.7
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.63
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 30.81
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/llama-3-luminous-merged
name: Open LLM Leaderboard
---
# llama-3-luminous-merged
llama-3-luminous-merged is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [nbeerbower/llama-3-stella-8B](https://huggingface.co/nbeerbower/llama-3-stella-8B)
* [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)
* [Kukedlc/NeuralLLaMa-3-8b-DT-v0.1](https://huggingface.co/Kukedlc/NeuralLLaMa-3-8b-DT-v0.1)
* [cstr/llama3-8b-spaetzle-v20](https://huggingface.co/cstr/llama3-8b-spaetzle-v20)
* [mlabonne/ChimeraLlama-3-8B-v3](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3)
* [flammenai/Mahou-1.1-llama3-8B](https://huggingface.co/flammenai/Mahou-1.1-llama3-8B)
* [KingNish/KingNish-Llama3-8b](https://huggingface.co/KingNish/KingNish-Llama3-8b)
## 🧩 Configuration
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
# No parameters necessary for base model
- model: nbeerbower/llama-3-stella-8B
parameters:
density: 0.6
weight: 0.16
- model: defog/llama-3-sqlcoder-8b
parameters:
density: 0.56
weight: 0.1
- model: nbeerbower/llama-3-gutenberg-8B
parameters:
density: 0.6
weight: 0.18
- model: openchat/openchat-3.6-8b-20240522
parameters:
density: 0.56
weight: 0.13
- model: Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
parameters:
density: 0.58
weight: 0.18
- model: cstr/llama3-8b-spaetzle-v20
parameters:
density: 0.56
weight: 0.08
- model: mlabonne/ChimeraLlama-3-8B-v3
parameters:
density: 0.56
weight: 0.07
- model: flammenai/Mahou-1.1-llama3-8B
parameters:
density: 0.55
weight: 0.05
- model: KingNish/KingNish-Llama3-8b
parameters:
density: 0.55
weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "BlackBeenie/llama-3-luminous-merged"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BlackBeenie__llama-3-luminous-merged)
| Metric |Value|
|-------------------|----:|
|Avg. |21.48|
|IFEval (0-Shot) |43.23|
|BBH (3-Shot) |30.64|
|MATH Lvl 5 (4-Shot)| 7.85|
|GPQA (0-shot) | 5.70|
|MuSR (0-shot) |10.63|
|MMLU-PRO (5-shot) |30.81|
|
s-nlp/bart-base-detox
|
s-nlp
| 2024-10-14T09:34:02Z | 242 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"detoxification",
"en",
"dataset:s-nlp/paradetox",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- detoxification
licenses:
- cc-by-nc-sa
license: openrail++
datasets:
- s-nlp/paradetox
base_model:
- facebook/bart-base
---
**Model Overview**
This is the model presented in the paper ["ParaDetox: Detoxification with Parallel Data"](https://aclanthology.org/2022.acl-long.469/).
The model itself is the [BART (base)](https://huggingface.co/facebook/bart-base) model trained on the parallel detoxification dataset ParaDetox, achieving SOTA results on the detoxification task. More details, code and data can be found [here](https://github.com/skoltech-nlp/paradetox).
**How to use**
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
base_model_name = 'facebook/bart-base'
model_name = 's-nlp/bart-base-detox'
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
input_ids = tokenizer.encode('This is completely idiotic!', return_tensors='pt')
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
# This is unwise!
```
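For multiple sentences, here is a minimal batched-inference sketch reusing the `tokenizer` and `model` objects from the snippet above (the beam-search settings are illustrative assumptions, not values from the paper):
```python
# Batch detoxification sketch; generation parameters here are illustrative.
toxic_inputs = ['This is completely idiotic!', 'What a stupid idea.']
batch = tokenizer(toxic_inputs, return_tensors='pt', padding=True, truncation=True)
output_ids = model.generate(**batch, max_length=50, num_beams=5)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```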
**Citation**
```
@inproceedings{logacheva-etal-2022-paradetox,
title = "{P}ara{D}etox: Detoxification with Parallel Data",
author = "Logacheva, Varvara and
Dementieva, Daryna and
Ustyantsev, Sergey and
Moskovskiy, Daniil and
Dale, David and
Krotova, Irina and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.469",
pages = "6804--6818",
abstract = "We present a novel pipeline for the collection of parallel data for the detoxification task. We collect non-toxic paraphrases for over 10,000 English toxic sentences. We also show that this pipeline can be used to distill a large existing corpus of paraphrases to get toxic-neutral sentence pairs. We release two parallel corpora which can be used for the training of detoxification models. To the best of our knowledge, these are the first parallel datasets for this task.We describe our pipeline in detail to make it fast to set up for a new language or domain, thus contributing to faster and easier development of new parallel resources.We train several detoxification models on the collected data and compare them with several baselines and state-of-the-art unsupervised approaches. We conduct both automatic and manual evaluations. All models trained on parallel data outperform the state-of-the-art unsupervised models by a large margin. This suggests that our novel datasets can boost the performance of detoxification systems.",
}
```
**License**
This model is licensed under the OpenRAIL++ License, which supports the development of various technologies—both industrial and academic—that serve the public good.
|
mradermacher/GEMMA2-9b-Pollux-exp-GGUF
|
mradermacher
| 2024-10-14T09:22:19Z | 23 | 1 |
transformers
|
[
"transformers",
"gguf",
"ja",
"en",
"dataset:Digirise-ai/logical_data",
"base_model:Digirise-ai/GEMMA2-9b-Pollux-exp",
"base_model:quantized:Digirise-ai/GEMMA2-9b-Pollux-exp",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-14T08:20:42Z |
---
base_model: Digirise-ai/GEMMA2-9b-Pollux-exp
datasets:
- Digirise-ai/logical_data
language:
- ja
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Digirise-ai/GEMMA2-9b-Pollux-exp
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
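As a minimal illustration (an assumption on my part, not an official recipe from this repo), one of the files from the table below can be loaded with the `llama-cpp-python` bindings:
```python
# Hedged sketch using llama-cpp-python (pip install llama-cpp-python);
# the local file name assumes the Q4_K_M quant from the table below was downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="GEMMA2-9b-Pollux-exp.Q4_K_M.gguf",
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to GPU if a GPU-enabled build is installed
)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```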
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GEMMA2-9b-Pollux-exp-GGUF/resolve/main/GEMMA2-9b-Pollux-exp.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adriansanz/ST-tramits-SB-001-5ep
|
adriansanz
| 2024-10-14T09:09:56Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2372",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-14T09:07:59Z |
---
base_model: BAAI/bge-m3
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2372
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Heu de veure si és necessari un estudi d'aïllament acústic i quin
nivell d'aïllament acústic precisa l'activitat.
sentences:
- Quin és el paper de les persones que resideixen amb el titular del dret d'habitatge
en la política d'habitatge?
- Quin és el límit de superfície per a les carpes informatives?
- Quin és l'objectiu de l'estudi d'aïllament acústic?
- source_sentence: 'Si us voleu matricular al proper curs 2022-2023 d''arts plàstiques
ho podeu fer a partir del 1 de juliol a les 16h, seleccionant una d''aquestes
opcions:'
sentences:
- Quin és el període de matrícula per al curs 2022-2023 d'arts plàstiques?
- Quan no cal presentar al·legacions en un expedient de baixa d'ofici?
- Quin és l'objectiu de les al·legacions respecte a un expedient sancionador de
l'Ordenança Municipal de Civisme i Convivència Ciutadana?
- source_sentence: Annexes Econòmics (Cooperació)
sentences:
- Qui és el responsable de l'elaboració de l'informe d'adequació de l'habitatge?
- Què han de fer les persones interessades durant el tràmit d'audiència en el procés
d'inclusió al registre municipal d'immobles desocupats?
- Quin és l'àmbit de la cooperació econòmica?
- source_sentence: En virtut del conveni de col.laboració amb l'Atrium de Viladecans,
tots els ciutadans que acreditin la seva residència a Viladecans es podran beneficiar
d'un 20% de descompte en la programació de teatre, música i dansa, objecte del
conveni.
sentences:
- Quin és el resultat de consultar un expedient d'activitats?
- Quin és el format de resposta d'aquesta sol·licitud?
- Quin és el descompte que s'aplica en la programació de teatre, música i dansa
per als ciutadans de Viladecans?
- source_sentence: Descripció. Retorna en format JSON adequat
sentences:
- Quin és el contingut de l'annex específic?
- Quin tipus d'ocupació es refereix a la renúncia de la llicència?
- Què passa amb l'habitatge?
model-index:
- name: SentenceTransformer based on BAAI/bge-m3
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 1024
type: dim_1024
metrics:
- type: cosine_accuracy@1
value: 0.33220910623946037
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5902192242833052
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6998313659359191
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8094435075885329
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.33220910623946037
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1967397414277684
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1399662731871838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08094435075885327
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.33220910623946037
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5902192242833052
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6998313659359191
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8094435075885329
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5625986746470664
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4843170320404718
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49243646079034575
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.3406408094435076
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5767284991568297
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6981450252951096
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8161888701517707
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3406408094435076
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19224283305227655
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1396290050590219
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08161888701517706
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3406408094435076
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5767284991568297
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6981450252951096
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8161888701517707
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5661348054508011
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4872065633448428
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49520736709122076
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.3305227655986509
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5801011804384486
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6947723440134908
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8161888701517707
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3305227655986509
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19336706014614952
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.13895446880269813
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08161888701517707
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3305227655986509
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5801011804384486
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6947723440134908
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8161888701517707
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5629643418278626
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4829913809256133
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49079988310494693
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.3288364249578415
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5885328836424958
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7015177065767285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8094435075885329
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3288364249578415
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1961776278808319
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14030354131534567
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08094435075885327
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3288364249578415
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5885328836424958
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7015177065767285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8094435075885329
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5625842077927447
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.48416981182579805
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49201787335851555
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.3473861720067454
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.581787521079258
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6998313659359191
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.806070826306914
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3473861720067454
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19392917369308602
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1399662731871838
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0806070826306914
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3473861720067454
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.581787521079258
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6998313659359191
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.806070826306914
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.565365572327355
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4893626703070211
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.49726527073459287
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.2917369308600337
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5682967959527825
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6644182124789207
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7875210792580101
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2917369308600337
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18943226531759413
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.13288364249578413
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07875210792580102
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2917369308600337
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5682967959527825
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6644182124789207
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7875210792580101
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5320349463938843
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.45117106988945077
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.45948574441166834
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/ST-tramits-SB-001-5ep")
# Run inference
sentences = [
'Descripció. Retorna en format JSON adequat',
"Quin és el contingut de l'annex específic?",
"Què passa amb l'habitatge?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_1024`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3322 |
| cosine_accuracy@3 | 0.5902 |
| cosine_accuracy@5 | 0.6998 |
| cosine_accuracy@10 | 0.8094 |
| cosine_precision@1 | 0.3322 |
| cosine_precision@3 | 0.1967 |
| cosine_precision@5 | 0.14 |
| cosine_precision@10 | 0.0809 |
| cosine_recall@1 | 0.3322 |
| cosine_recall@3 | 0.5902 |
| cosine_recall@5 | 0.6998 |
| cosine_recall@10 | 0.8094 |
| cosine_ndcg@10 | 0.5626 |
| cosine_mrr@10 | 0.4843 |
| **cosine_map@100** | **0.4924** |
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3406 |
| cosine_accuracy@3 | 0.5767 |
| cosine_accuracy@5 | 0.6981 |
| cosine_accuracy@10 | 0.8162 |
| cosine_precision@1 | 0.3406 |
| cosine_precision@3 | 0.1922 |
| cosine_precision@5 | 0.1396 |
| cosine_precision@10 | 0.0816 |
| cosine_recall@1 | 0.3406 |
| cosine_recall@3 | 0.5767 |
| cosine_recall@5 | 0.6981 |
| cosine_recall@10 | 0.8162 |
| cosine_ndcg@10 | 0.5661 |
| cosine_mrr@10 | 0.4872 |
| **cosine_map@100** | **0.4952** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3305 |
| cosine_accuracy@3 | 0.5801 |
| cosine_accuracy@5 | 0.6948 |
| cosine_accuracy@10 | 0.8162 |
| cosine_precision@1 | 0.3305 |
| cosine_precision@3 | 0.1934 |
| cosine_precision@5 | 0.139 |
| cosine_precision@10 | 0.0816 |
| cosine_recall@1 | 0.3305 |
| cosine_recall@3 | 0.5801 |
| cosine_recall@5 | 0.6948 |
| cosine_recall@10 | 0.8162 |
| cosine_ndcg@10 | 0.563 |
| cosine_mrr@10 | 0.483 |
| **cosine_map@100** | **0.4908** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.3288 |
| cosine_accuracy@3 | 0.5885 |
| cosine_accuracy@5 | 0.7015 |
| cosine_accuracy@10 | 0.8094 |
| cosine_precision@1 | 0.3288 |
| cosine_precision@3 | 0.1962 |
| cosine_precision@5 | 0.1403 |
| cosine_precision@10 | 0.0809 |
| cosine_recall@1 | 0.3288 |
| cosine_recall@3 | 0.5885 |
| cosine_recall@5 | 0.7015 |
| cosine_recall@10 | 0.8094 |
| cosine_ndcg@10 | 0.5626 |
| cosine_mrr@10 | 0.4842 |
| **cosine_map@100** | **0.492** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3474 |
| cosine_accuracy@3 | 0.5818 |
| cosine_accuracy@5 | 0.6998 |
| cosine_accuracy@10 | 0.8061 |
| cosine_precision@1 | 0.3474 |
| cosine_precision@3 | 0.1939 |
| cosine_precision@5 | 0.14 |
| cosine_precision@10 | 0.0806 |
| cosine_recall@1 | 0.3474 |
| cosine_recall@3 | 0.5818 |
| cosine_recall@5 | 0.6998 |
| cosine_recall@10 | 0.8061 |
| cosine_ndcg@10 | 0.5654 |
| cosine_mrr@10 | 0.4894 |
| **cosine_map@100** | **0.4973** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2917 |
| cosine_accuracy@3 | 0.5683 |
| cosine_accuracy@5 | 0.6644 |
| cosine_accuracy@10 | 0.7875 |
| cosine_precision@1 | 0.2917 |
| cosine_precision@3 | 0.1894 |
| cosine_precision@5 | 0.1329 |
| cosine_precision@10 | 0.0788 |
| cosine_recall@1 | 0.2917 |
| cosine_recall@3 | 0.5683 |
| cosine_recall@5 | 0.6644 |
| cosine_recall@10 | 0.7875 |
| cosine_ndcg@10 | 0.532 |
| cosine_mrr@10 | 0.4512 |
| **cosine_map@100** | **0.4595** |
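For reference, here is a minimal sketch of how such an evaluation is wired up with `InformationRetrievalEvaluator` (the toy queries, corpus, and relevance mapping below are placeholders, not the actual evaluation data):
```python
# Toy IR evaluation sketch; the data here is a placeholder, not the real eval set.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("adriansanz/ST-tramits-SB-001-5ep")
queries = {"q1": "Quin és el període de matrícula?"}
corpus = {
    "d1": "La matrícula comença l'1 de juliol a les 16h.",
    "d2": "Text no relacionat amb la pregunta.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy_ir")
results = evaluator(model)  # dict of metrics such as cosine_ndcg@10, cosine_map@100
print(results)
```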
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 2,372 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 35.12 tokens</li><li>max: 166 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 19.49 tokens</li><li>max: 47 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
| <code>Comunicar la variació d'alguna de les següents dades del Padró Municipal d'Habitants: Nom, Cognoms, Data de naixement, DNI, Passaport, Número de permís de residència (NIE), Sexe, Municipi i/o província de naixement, Nacionalitat, Titulació acadèmica.</code> | <code>Quin és l'objectiu del canvi de dades personals en el Padró Municipal d'Habitants?</code> |
| <code>EN QUÈ CONSISTEIX: Tramitar la sol·licitud de matrimoni civil a l'Ajuntament.</code> | <code>Què és el matrimoni civil a l'Ajuntament de Sant Boi de Llobregat?</code> |
| <code>En domiciliar el pagament de tributs municipals en entitats bancàries.</code> | <code>Quin és el benefici de domiciliar el pagament de tributs?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
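In code, this configuration corresponds roughly to the following construction (a sketch, assuming the sentence-transformers v3 `losses` API):
```python
# Hedged sketch of building the loss described above.
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("BAAI/bge-m3")
inner_loss = losses.MultipleNegativesRankingLoss(model)
train_loss = losses.MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],  # weights default to 1 per dim
)
```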
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.2
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
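Expressed in code, the settings above correspond roughly to the following (a sketch; `output_dir` and `save_strategy` are assumptions added so the example runs as written):
```python
# Hedged sketch of the non-default training arguments listed above.
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",     # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",   # must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```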
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_1024_cosine_map@100 | dim_768_cosine_map@100 | dim_512_cosine_map@100 | dim_256_cosine_map@100 | dim_128_cosine_map@100 | dim_64_cosine_map@100 |
|:----------:|:------:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.9664 | 9 | - | 0.4730 | 0.4766 | 0.4640 | 0.4612 | 0.4456 | 0.4083 |
| 1.0738 | 10 | 2.6023 | - | - | - | - | - | - |
| 1.9329 | 18 | - | 0.4951 | 0.4966 | 0.4977 | 0.4773 | 0.4849 | 0.4501 |
| 2.1477 | 20 | 0.974 | - | - | - | - | - | - |
| 2.8993 | 27 | - | 0.4891 | 0.4973 | 0.4941 | 0.4867 | 0.4925 | 0.4684 |
| 3.2215 | 30 | 0.408 | - | - | - | - | - | - |
| **3.9732** | **37** | **-** | **0.4944** | **0.4998** | **0.4931** | **0.4991** | **0.4974** | **0.4616** |
| 4.2953 | 40 | 0.2718 | - | - | - | - | - | - |
| 4.8322 | 45 | - | 0.4924 | 0.4952 | 0.4908 | 0.4920 | 0.4973 | 0.4595 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 1.1.0.dev0
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
QuantFactory/Odins-Allseeing-Eye-9B-GGUF
|
QuantFactory
| 2024-10-14T09:01:56Z | 73 | 3 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:Delta-Vector/Odin-9B",
"base_model:merge:Delta-Vector/Odin-9B",
"base_model:lemon07r/Gemma-2-Ataraxy-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-14T08:03:55Z |
---
base_model:
- lemon07r/Gemma-2-Ataraxy-9B
- Delta-Vector/Odin-9B
library_name: transformers
tags:
- mergekit
- merge
---
[](https://hf.co/QuantFactory)
# QuantFactory/Odins-Allseeing-Eye-9B-GGUF
This is a quantized version of [Delta-Vector/Odins-Allseeing-Eye-9B](https://huggingface.co/Delta-Vector/Odins-Allseeing-Eye-9B), created using llama.cpp.
# Original Model Card
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B)
* [Delta-Vector/Odin-9B](https://huggingface.co/Delta-Vector/Odin-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Delta-Vector/Odin-9B
- model: lemon07r/Gemma-2-Ataraxy-9B
merge_method: slerp
base_model: Delta-Vector/Odin-9B
parameters:
t:
- value: 0.9
dtype: bfloat16
```
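Conceptually, SLERP interpolates each pair of weight tensors along the great-circle arc between them rather than linearly; the sketch below illustrates the idea (it is not mergekit's actual implementation):
```python
# Illustrative SLERP between two weight tensors; not mergekit's implementation.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    v0_flat, v1_flat = v0.flatten(), v1.flatten()
    omega = torch.arccos(torch.clamp(
        torch.dot(v0_flat / (v0_flat.norm() + eps), v1_flat / (v1_flat.norm() + eps)),
        -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # near-parallel tensors: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return ((torch.sin((1.0 - t) * omega) / so) * v0_flat
            + (torch.sin(t * omega) / so) * v1_flat).reshape(v0.shape)

# With t = 0.9 as in the config above, the merged weights sit much closer to
# Gemma-2-Ataraxy-9B than to the Odin-9B base.
```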
|
XLabs-AI/flux-controlnet-hed-diffusers
|
XLabs-AI
| 2024-10-14T08:52:36Z | 447 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"controlnet",
"Flux",
"image-generation",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-10T16:36:44Z |
---
license: other
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
tags:
- diffusers
- controlnet
- Flux
- image-generation
---
# Description
This repository provides the Diffusers version of the FLUX.1-dev HED ControlNet checkpoint by XLabs AI ([original repo](https://huggingface.co/XLabs-AI/flux-controlnet-hed-v3)).

# How to use
This model can be used directly with the `diffusers` library:
```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetModel
from diffusers.pipelines import FluxControlNetPipeline
generator = torch.Generator(device="cuda").manual_seed(87544357)
controlnet = FluxControlNetModel.from_pretrained(
"Xlabs-AI/flux-controlnet-hed-diffusers",
torch_dtype=torch.bfloat16,
use_safetensors=True,
)
pipe = FluxControlNetPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
controlnet=controlnet,
torch_dtype=torch.bfloat16
)
pipe.to("cuda")
control_image = load_image("https://huggingface.co/Xlabs-AI/flux-controlnet-hed-diffusers/resolve/main/hed_example.png")
prompt = "photo of woman in the cyberpank city"
image = pipe(
prompt,
control_image=control_image,
controlnet_conditioning_scale=0.7,
num_inference_steps=25,
guidance_scale=3.5,
height=1376,
width=1024,
generator=generator,
num_images_per_prompt=1,
).images[0]
image.save("output_test_controlnet.png")
```
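To condition on your own photo rather than the ready-made example, a HED edge map can be produced with the `controlnet_aux` package (a sketch; the annotator checkpoint name is the one commonly used by that package, not something specified in this repo):
```python
# Hedged sketch: producing a HED control image from an arbitrary photo.
from controlnet_aux import HEDdetector
from diffusers.utils import load_image

hed = HEDdetector.from_pretrained("lllyasviel/Annotators")  # common annotator weights
source = load_image("path/or/url/to/your_photo.png")        # placeholder input
control_image = hed(source)  # PIL image to pass as control_image above
```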
## License
Our weights fall under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License<br/>
|
XLabs-AI/flux-controlnet-depth-diffusers
|
XLabs-AI
| 2024-10-14T08:52:04Z | 316 | 8 |
diffusers
|
[
"diffusers",
"safetensors",
"controlnet",
"Flux",
"image-generation",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-10T16:30:08Z |
---
license: other
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
tags:
- diffusers
- controlnet
- Flux
- image-generation
---
# Description
This repository provides the Diffusers version of the FLUX.1-dev Depth ControlNet checkpoint by XLabs AI ([original repo](https://huggingface.co/XLabs-AI/flux-controlnet-depth-v3)).

# How to use
This model can be used directly with the `diffusers` library:
```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetModel
from diffusers.pipelines import FluxControlNetPipeline
generator = torch.Generator(device="cuda").manual_seed(87544357)
controlnet = FluxControlNetModel.from_pretrained(
"Xlabs-AI/flux-controlnet-depth-diffusers",
torch_dtype=torch.bfloat16,
use_safetensors=True,
)
pipe = FluxControlNetPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev",
controlnet=controlnet,
torch_dtype=torch.bfloat16
)
pipe.to("cuda")
control_image = load_image("https://huggingface.co/Xlabs-AI/flux-controlnet-depth-diffusers/resolve/main/depth_example.png")
prompt = "photo of fashion woman in the street"
image = pipe(
prompt,
control_image=control_image,
controlnet_conditioning_scale=0.7,
num_inference_steps=25,
guidance_scale=3.5,
height=768,
width=1024,
generator=generator,
num_images_per_prompt=1,
).images[0]
image.save("output_test_controlnet.png")
```
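As with the HED variant, a depth map for your own photo can be produced with the `controlnet_aux` package (a sketch; the annotator checkpoint name is the one commonly used by that package, not something specified in this repo):
```python
# Hedged sketch: producing a depth control image from an arbitrary photo.
from controlnet_aux import MidasDetector
from diffusers.utils import load_image

midas = MidasDetector.from_pretrained("lllyasviel/Annotators")  # common annotator weights
source = load_image("path/or/url/to/your_photo.png")            # placeholder input
control_image = midas(source)  # depth map to pass as control_image above
```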
## License
Our weights fall under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License<br/>
|