| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-15 00:44:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 557 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-15 00:44:36) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| leap-llm/Meta-Llama-3-8B-Instruct-sft-alfworld-webshop-intercode-iter1 | leap-llm | 2024-11-20T16:21:42Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-20T16:10:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
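Until the authors fill this section in, here is a minimal sketch using the standard `transformers` text-generation API; the prompt and generation settings below are illustrative assumptions, not the authors' recommended usage.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leap-llm/Meta-Llama-3-8B-Instruct-sft-alfworld-webshop-intercode-iter1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt only; the card does not document an expected input format
inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```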
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| RichardErkhov/w601sxs_-_b1ade-1b-bf16-awq | RichardErkhov | 2024-11-20T16:21:10Z | 5 | 0 | null | ["safetensors", "gpt_neox", "4-bit", "awq", "region:us"] | null | 2024-11-20T16:20:19Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
b1ade-1b-bf16 - AWQ
- Model creator: https://huggingface.co/w601sxs/
- Original model: https://huggingface.co/w601sxs/b1ade-1b-bf16/
Original model description:
---
library_name: transformers
datasets:
- kaist-ai/CoT-Collection
---
# Model Card for b1ade-1b
Instruction fine-tuned 1B-parameter model; pass in:
1. `context: <...>`
2. `question: <...>`
and expect an `answer: <...>`
See the implementation example below (also see https://huggingface.co/spaces/w601sxs/b1ade-1b):
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)

BASE_MODEL = "w601sxs/b1ade-1b-bf16"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    offload_folder="offload",
)
model.eval()


class KeywordsStoppingCriteria(StoppingCriteria):
    """Stop generation once the last generated token is one of the keyword ids."""

    def __init__(self, keywords_ids: list):
        self.keywords = keywords_ids

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return input_ids[0][-1] in self.keywords


# Stop as soon as the closing '>' of the answer span is generated
stop_words = [">", " >", "> "]
stop_ids = [tokenizer.encode(w)[0] for w in stop_words]
stop_criteria = StoppingCriteriaList([KeywordsStoppingCriteria(keywords_ids=stop_ids)])


def predict(text):
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs["input_ids"],
            max_new_tokens=128,
            stopping_criteria=stop_criteria,
        )
    # Keep only the text generated after the prompt's "answer:" marker
    out_text = tokenizer.batch_decode(
        outputs.detach().cpu().numpy(), skip_special_tokens=True
    )[0].split("answer:")[-1]
    print(out_text.split(text)[-1])


predict("context: <The center contact of the bulb typically connects to the medium-power filament, and the ring connects to the low-power filament. Thus, if a 3-way bulb is screwed into a standard light socket that has only a center contact, only the medium-power filament operates. In the case of the 50 W / 100 W / 150 W bulb, putting this bulb in a regular lamp socket will result in it behaving like a normal 100W bulb.>\n question: <Question: Do 3 way light bulbs work in any lamp?>\n")
```
| mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF | mradermacher | 2024-11-20T16:21:04Z | 23 | 1 | transformers | ["transformers", "gguf", "merge", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-19T01:56:20Z |
---
base_model: newsbang/Homer-v0.4-Qwen2.5-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/newsbang/Homer-v0.4-Qwen2.5-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
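As a minimal sketch (not part of the original card), one way to fetch a single quant and run it locally is with `huggingface_hub` and `llama-cpp-python`; the quant file is picked arbitrarily from the table below, and the prompt and settings are illustrative assumptions.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant file (Q4_K_M chosen arbitrarily from the table below)
path = hf_hub_download(
    repo_id="mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF",
    filename="Homer-v0.4-Qwen2.5-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a haiku about autumn.", max_tokens=64)
print(out["choices"][0]["text"])
```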
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Homer-v0.4-Qwen2.5-7B-GGUF/resolve/main/Homer-v0.4-Qwen2.5-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| mradermacher/Copus-2x8B-GGUF | mradermacher | 2024-11-20T16:19:16Z | 18 | 0 | transformers | ["transformers", "gguf", "en", "base_model:lodrick-the-lafted/Copus-2x8B", "base_model:quantized:lodrick-the-lafted/Copus-2x8B", "license:llama2", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-20T15:44:19Z |
---
base_model: lodrick-the-lafted/Copus-2x8B
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/lodrick-the-lafted/Copus-2x8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Copus-2x8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q2_K.gguf) | Q2_K | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q3_K_S.gguf) | Q3_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q3_K_L.gguf) | Q3_K_L | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.IQ4_XS.gguf) | IQ4_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q5_K_S.gguf) | Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q5_K_M.gguf) | Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q6_K.gguf) | Q6_K | 11.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Copus-2x8B-GGUF/resolve/main/Copus-2x8B.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| mini1013/master_cate_lh14 | mini1013 | 2024-11-20T16:18:23Z | 241 | 0 | setfit | ["setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us"] | text-classification | 2024-11-20T16:17:58Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์ธ์ฒด๋ชจํ ๊ต์ก์ฉ ํด๋ถ ํด๋ถํ ๋ง๋คํน ํด๊ณจ ๋ผ ์๊ทธ๋ ์ด๋๋๋ฒ์ 62CM์์ํฐ์๋จ์ฑ๋ชจ๋ธ์ฐ๋ฅผ์์์ ์์ค์์ด์ปดํผ๋
- text: ๊ณ ๊ธ ํจ๋ธ๋ฆญ์ ๋ฆฌํจ ๋ฆฌ๋น๋ฐ์ค ํธ์์ปค๋ฒ ์ํ ๊ธฐ์79
- text: ๊ฑฐ์น๋ ๋ํ ์ ์ ์ง์ด ํ๊ถ๋ ํธ๋กํผ ๋ฉ๋ฌ ์คํฌ์ธ ๋์คํ๋ ์ด ์ ๋ฐ ๊ฐ๋ก 120 ์ธ๋ก 20์ผํฐ_๋ผ์ดํธ ํ์ค์ฒ ๋๋ฌดํ์ ์์ํ:์ค์ ๋ผ์ธํ๋ ์ฆ์คํ๋์ค
- text: ํฌ๋ช์กฐ๋ฆฝ์ ์ ๋ฐ์ฅ ์ ๋ฐ ์ ๋ฆฌ๋ ์๋ฉ์ฅ ๋ณด๊ดํจ ๋ฏผํธ ์ด๋ฆผ๊ณต๋ฐฑ
- text: ๊ณ ์์ด ์ฒ ์ ์ค๋ดํ ์ ๋ฆฌ๋ ์ฌ๋ฆฌํผ ๊ฝ์ด ๊ฑธ์ด ๊ฑฐ์น๋ ํ๊ด ํ์ฅ์ค ํ๊ด ํ๋ ์์ฆ๋ 4๋จ ๊ณ ์์ด ์ค๋ดํ๊ฑฐ์น๋_ํ์ดํธ ํฐ๋น์ค์ง์ปดํผ๋ (TB&G Co.)
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.937399876771411
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
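As a rough illustration of that two-step procedure (not from the original card), here is a `setfit` training sketch with placeholder data; the batch size and epoch count mirror the hyperparameters reported below, but the dataset is a stand-in.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data; the actual model was trained on 50 product titles per class
train_dataset = Dataset.from_dict({
    "text": ["example product title A", "example product title B"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(batch_size=512, num_epochs=20)

# trainer.train() runs both steps: contrastive fine-tuning of the
# embedding body, then fitting the LogisticRegression head on top
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```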
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 9 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6.0 | <ul><li>'์์ํ๋ฆฌ๋น ๋ฒ ์ด์ง ํผํผ ๋ผ์ฌ๋ฆฝ ์ท๊ฑธ์ด 1P ํผํ ์ฃผ์ํ์ฌ์๋ฒ์'</li><li>'[๋ฆฌ๋ฒค์ค] ์ฝํ๋ผ์ฌ๋ฆฝ ๋ฐ์ง๊ฑธ์ด 20P / ์๋ชฉ ๋ฐ์ง๊ฑธ์ด ๋ฐ ํ๊ฑฐ G.ํ์ดํธ ์ฑ๊ธ 1๋จ ์ด๋์ํ๊ฑฐ ์์๋ฆฌ๋น'</li><li>'[ ํ๋งค] ์์ด์ง๋ฒ ์คํธ ๋ฐ๋๊ธฐ ๋ด๋งค์ง ํ๊ฑฐ 30p ๋ฐ๋๊ธฐ'</li></ul> |
| 3.0 | <ul><li>'์คํ ์ ํ๊ธฐ์ปค๋ฒ ๋๋ฐฉ๊ธฐ ์จํ๊ธฐ ๋ณด๊ด ๋ฎ๊ฐ ์ปค๋ฒ ๋จผ์ง์ฐจ๋จ ํค๋ํ ์คํ ๋ํ ์คํ์ ํ๊ธฐ์ปค๋ฒ ํค๋ํ_๋ฒ ์ด์ง ๊ตฟ์กฐ์ธ'</li><li>'๋ณด๊ด ์คํ ๋ ์ ํ๊ธฐ ์ปค๋ฒ ๋ฐฉ์ ์ผ๋ฐํ ํค๋ ์ผ์ด์ค ๋ถ์ ๋ฝํ ์๋ฉํ๊ธฐ์ข์ ์นด๋ฐ ํค๋ํ_๊ทธ๋ ์ด ๋ฐ๋ฅธ์ํ'</li><li>'๊ณต์์ฉ ์ ํ๊ธฐ ์์ ๋ง ๋ํ ์ฐ์๋ณดํธ๋ง ์ปค๋ฒ ์ฉํ ์นด๋ฐ 30in ๊ณ ํ๋ฆฌ'</li></ul> |
| 4.0 | <ul><li>'๋ฌดํ๊ณต ๋ฒฝ๋ถ์ฐฉ ํ์ฅ์ง ์๋ฉ ๋ค์ฉ๋ ๊ฑธ์ด ๋ฐ๊ตฌ๋ ์ํ ์ ๋ฐ๋ฐ ์ค์ผ ๊ฑฐ์น๋ ์ผ์ด์ค ์ธํ๋ฆฌ์ด ๊ทธ๋ ์ด1P ์ด๋ธํธ๋ฌ์คํธ'</li><li>'์คํ์ฃผ๋ฐฉ ์ฑํฌ๋ ๋์ดํ๊ฑฐ ํํฌ ์๊ฑด ํ์ฃผ ๊ฑธ์ด ๋ฌธ๊ฑธ์ดํ๊ฑฐ ํ์ดํธ๋์ฅ_๋์ดํ๊ฑฐ1P ์ฃผ์ํ์ฌ ์ ์ด์ผ์ด์จ๊ธ๋ก๋ฒ'</li><li>'๋ก์๋ ์ด๊ฐ๋ ฅ ๋ง๊ทธ๋คํฑ ์์ ํํฌ ๋ง์คํฌ ๊ฑธ์ด 10P ๋ฏธ๋์์ํํฌ 10๊ฐ์ธํธ(E10) ๋ฐ์ผ๋ฆฌ๋ง์ผ'</li></ul> |
| 7.0 | <ul><li>'์ฌํ ํน๊ฑฐ์ปค๋ฒ ํน๊ฑฐ์ปค๋ฒ-๋ธ๋ผ์ด ๊ฐฑ์์ํ'</li><li>'๋จผ์ง๋ฐฉ์ง ํ๋ฆฌ๋ฏธ์ ์ท์ปค๋ฒ ํผํฉ(์์ผ์ฉ10P+์ฝํธ์ฉ5P) ์ด 15P GSSHOP_'</li><li>'์ธ์ณํ ์๋ฅ๋จผ์ง๋ฐฉ์ง๋ฎ๊ฐ ํ๊ฑฐ์ปค๋ฒ ์๋ฅ์ค์ผ๋ฐฉ์ง์ปค๋ฒ No 01.๋ฐํฌ๋ช๋จผ์ง ๋ฐฉ์ง๋ฎ๊ฐ 60X110 ๋์์์ค'</li></ul> |
| 2.0 | <ul><li>'[๋น์นด] ํธ๋กค๋ฆฌ /์๋ฉํจ/์ธ๋ ์ด/๊ธฐ์ ๊ทํจ/์ด๋์์๋ฉํจ/์ฃผ๋ฐฉ์ฉํ/์๋น ๋น์นด_๋คํฌํฌ๋ ์ด '</li><li>'ํ ๋ฉ์ด๋ ์ฃผ๋ฐฉ ํ์ฅํ ์๋ ์ด๋์ ํธ๋กค๋ฆฌ ํ์ ์ ๋ฐ ๋ฐํด๋ฌ๋ฆฐ ์๋ฉ์ฅ ํธ๋ ์ด 3๋จ ๋ฉ์ฌ ํธ๋กค๋ฆฌ 4๋จ ํ์ดํธ (์ฃผ)ํ ๋ฉ์ด๋'</li><li>'์ํ์ค ๋ค์ฉ๋ ๋ฒ ๋๋ค๋ฒฝ์ ๋ฐ ํฌํธ๋ฆฌ ์ธํ์ค ๋ฒฝ๊ฑธ์ด ๋ฐ๋ฌ ํ์ดํธ/๋ด์คํ์ธ๋ ์ค/์ฌ์คํ์ธ๋ ์ค ๊ตญ๋ด์ ์_MC ๋ฐ๋ฌํ์ดํธ 43cm_1๋จ 1.2m ๋ฉธ์น์ผํ'</li></ul> |
| 1.0 | <ul><li>'์ผํ ๋ฐ๊ตฌ๋ ๋งํธ ์ฅ๋ฐ๊ตฌ๋ ํ๋ผ์คํฑ ์์ฅ ํธ์์ AA ๋ฏธ๋ ์ด๋ก ์ผํ ๋ฐ๊ตฌ๋ ์๋ ์ด๋ก ์ฃผ์ํ์ฌ ์ค๋ฉ๊ฐํค์น'</li><li>'[๋ค์ด์]์นธ์นธ์ด์๋ฉ๋ฐ๊ตฌ๋-1001900 (์ฃผ)ํ์ฐ์ด์ผํ'</li><li>'๋๋๋ฌด ์์ ํ๊ณผ๋ฐ๊ตฌ๋ ๋๋๋ฌดํจ ํ๊ณผ์์ ํ๋ฐฑ ๋ก๋ฐ ํ์ํ์_์ฌ๊ฐ ์์ - ์ ์ด์ก์ค2'</li></ul> |
| 0.0 | <ul><li>'[์ ๊ตฌ์ ๊ตฌ]๊ฐ์์ง ์ ๊ฒฌ ๋ง๋คํน ์ธํ S/M/L 04_L ๋ธ๋์๋์ด ์ ์ธ๊ณ๋ชฐ'</li><li>'๋งค์ฅ์ฉ ๋ชฉ๊ฑธ์ด ๊ท๊ฑธ์ด ๊ฑฐ์น ๋ง๋คํน ๋์คํ๋ ์ด ์ฅฌ์ผ๋ฆฌ 14 ์์ํ'</li><li>'์์ฒด๋ง๋คํน ๋จ์ฑ์ ์ฅ ๋์คํ๋ ์ด ๋ง๋คํน ์๋ณต์ ์ ์ ๋๋ ์ค ์คํ ๋ ์๋ฅ๋งค์ฅ๋ชจ๋ธํ๊ฑฐ Q ์๋ ์ผํ'</li></ul> |
| 5.0 | <ul><li>'๋งจ์์ง๊ณต์์ญ์์ถํฉ ๋งจ์์์ถํฉ 45x70 ์ธ์ธ ํด๋ฌ์คํฐ'</li><li>'๋ํ ์์ฒด ์์ถํฉ ์ด๋ถ ์ธํ ๋ณด๊ด ์ํ์ ํ_์คํธ์ธ๋กํ ํ๋ฐค'</li><li>'๋งจ์์ง๊ณต์์ญ์์ถํฉ ๋งจ์์์ถํฉ 56x80 ์ธ์ธ ํด๋ฌ์คํฐ'</li></ul> |
| 8.0 | <ul><li>'4๊ฐ ํ์ผ ๋ํ ์คํ ์ข์ด์๋ฉ๋ฐ์ค ๋ฆฌ๋น ๋ํ ์บฃ๋ง์คํฐ'</li><li>'[ํ ๋ก์ฐ์ฆ] ๋ฑ์ปค์ค ๋ฒ ์ด์ง / ํ๋ ์คํธ ํ์ผ๋ฐ์ค(2๊ฐ์) ๋ฒ ์ด์ง ์คํผ์ค๋ํฌ ์์๊ตฐํฌ์ '</li><li>'ํ ๋ก์ฐ์ฆ ํ์ผ๋ฐ์ค Presto ๋ธ๋ฃจ 17243 2๊ฐ ํ๋ ์ค์จ์ปดํผ๋'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9374 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh14")
# Run inference
preds = model("๊ณ ๊ธ ํจ๋ธ๋ฆญ์ ๋ฆฌํจ ๋ฆฌ๋น๋ฐ์ค ํธ์์ปค๋ฒ ์ํ ๊ธฐ์79")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 10.5244 | 22 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0141 | 1 | 0.3887 | - |
| 0.7042 | 50 | 0.3275 | - |
| 1.4085 | 100 | 0.1223 | - |
| 2.1127 | 150 | 0.0307 | - |
| 2.8169 | 200 | 0.0273 | - |
| 3.5211 | 250 | 0.0253 | - |
| 4.2254 | 300 | 0.0097 | - |
| 4.9296 | 350 | 0.0156 | - |
| 5.6338 | 400 | 0.0156 | - |
| 6.3380 | 450 | 0.0175 | - |
| 7.0423 | 500 | 0.0136 | - |
| 7.7465 | 550 | 0.0117 | - |
| 8.4507 | 600 | 0.002 | - |
| 9.1549 | 650 | 0.0174 | - |
| 9.8592 | 700 | 0.0155 | - |
| 10.5634 | 750 | 0.0136 | - |
| 11.2676 | 800 | 0.0193 | - |
| 11.9718 | 850 | 0.0135 | - |
| 12.6761 | 900 | 0.0004 | - |
| 13.3803 | 950 | 0.0001 | - |
| 14.0845 | 1000 | 0.0001 | - |
| 14.7887 | 1050 | 0.0001 | - |
| 15.4930 | 1100 | 0.0 | - |
| 16.1972 | 1150 | 0.0 | - |
| 16.9014 | 1200 | 0.0 | - |
| 17.6056 | 1250 | 0.0 | - |
| 18.3099 | 1300 | 0.0 | - |
| 19.0141 | 1350 | 0.0 | - |
| 19.7183 | 1400 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| sarmadsiddiqui29/Llama-3.2-3B-Instruct-Urdu-Stories | sarmadsiddiqui29 | 2024-11-20T16:14:26Z | 166 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-11-20T16:11:59Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sarmadsiddiqui29
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
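The upload note ships no usage snippet; as a minimal sketch (not from the original card), loading the model through the standard `transformers` chat workflow might look like this, where the prompt and generation settings are illustrative assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarmadsiddiqui29/Llama-3.2-3B-Instruct-Urdu-Stories"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; the repo name suggests the model targets Urdu story generation
messages = [{"role": "user", "content": "Write a short story about a clever crow."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```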
| RichardErkhov/AdamLucek_-_Orpo-Llama-3.2-1B-15k-awq | RichardErkhov | 2024-11-20T16:13:08Z | 5 | 0 | null | ["safetensors", "llama", "arxiv:2403.07691", "4-bit", "awq", "region:us"] | null | 2024-11-20T16:12:02Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Orpo-Llama-3.2-1B-15k - AWQ
- Model creator: https://huggingface.co/AdamLucek/
- Original model: https://huggingface.co/AdamLucek/Orpo-Llama-3.2-1B-15k/
Original model description:
---
library_name: transformers
license: mit
datasets:
- mlabonne/orpo-dpo-mix-40k
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-generation
---
# Orpo-Llama-3.2-1B-15k
AdamLucek/Orpo-Llama-3.2-1B-15k is an [ORPO](https://arxiv.org/abs/2403.07691) fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on a subset of 15,000 shuffled entries of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
It was trained for 7 hours on an L4 GPU with [this training script](https://colab.research.google.com/drive/1KV9AFAfhQCSjF8Ej4rI2ejDmx5AUnqHq?usp=sharing), modified from [Maxime Labonne's original guide](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html).
For full model details, refer to the base model page [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
## Evaluations
| Benchmark | Accuracy | Notes |
|-----------|----------|--------|
| AGIEval | 20.99% | Average across multiple reasoning tasks |
| GPT4ALL | 51.12% | Average across all categories |
| TruthfulQA | 42.80% | MC2 accuracy |
| BigBench | 31.75% | Average across 18 tasks |
| MMLU | 31.23% | Average across all categories |
| Winogrande | 61.33% | 5-shot evaluation |
| ARC Challenge | 35.92% | 25-shot evaluation |
| HellaSwag | 48.65% | 10-shot evaluation |
[Detailed Eval Metrics Available Here](https://docs.google.com/document/d/174SRz1pb9GIJ4kIOoMOEyN6ebz3PrEX-9rNnlcVOjyM/edit?usp=sharing)
## Using this Model
```python
from transformers import AutoTokenizer
import transformers
import torch
# Load Model and Pipeline
model = "AdamLucek/Orpo-Llama-3.2-1B-15k"
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model)
# Generate Message
messages = [{"role": "user", "content": "What is a language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training Statistics
<div style="display: grid; grid-template-columns: repeat(2, 1fr); gap: 5px; max-width: 1000px;">
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65ba68a15d2ef0a4b2c892b4/p_GHj_vst0xnC7tBznwRk.png" alt="Panel 1" style="width: 100%; height: auto;">
</div>
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65ba68a15d2ef0a4b2c892b4/AT6XO0WuHOWICT5omJ1L5.png" alt="Panel 2" style="width: 100%; height: auto;">
</div>
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65ba68a15d2ef0a4b2c892b4/XOXtthQ1RWxzcIP6V8-o_.png" alt="Panel 3" style="width: 100%; height: auto;">
</div>
<div>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65ba68a15d2ef0a4b2c892b4/WmV9BWOBxElAvZ3aClgUu.png" alt="Panel 4" style="width: 100%; height: auto;">
</div>
</div>
| RichardErkhov/NucleusOrg_-_Nucleus-1B-alpha-1-awq | RichardErkhov | 2024-11-20T16:12:46Z | 5 | 0 | null | ["safetensors", "mistral", "4-bit", "awq", "region:us"] | null | 2024-11-20T16:11:43Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nucleus-1B-alpha-1 - AWQ
- Model creator: https://huggingface.co/NucleusOrg/
- Original model: https://huggingface.co/NucleusOrg/Nucleus-1B-alpha-1/
Original model description:
---
license: mit
language:
- en
datasets:
- nampdn-ai/tiny-textbooks
---
# Nucleus 1B Alpha1
<p align="center">
<img src="https://github.com/prp-e/nucleus/raw/main/nucleus-logo.png" width=256 height=256>
</p>
## What is Nucleus?
Nucleus is a small language model based on Mistral (specifically, the trimmed untrained version you can find [here](https://huggingface.co/lmlab/lmlab-mistral-1b-untrained)) and trained in several steps. First, we pretrained it on the TinyStories dataset, then on [TinyTextBooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks) to make it a more specialized model. This model is just a _proof of concept_ at this point, but it showed good promise in early tests, so with proper training it can become a good product over time!
## Inference
[](https://colab.research.google.com/github/prp-e/nucleus/blob/main/nucleus_1b_inference.ipynb)
First, you need to install the `transformers` and `accelerate` libraries in order to run this model. Then run the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
model_name_or_id = "NucleusOrg/Nucleus-1B-alpha-1"
model = AutoModelForCausalLM.from_pretrained(model_name_or_id, torch_dtype=torch.float16, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_id)
prompt = "### Lesson: Python Programming 101\n### Introduction\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generation_config = GenerationConfig(
    do_sample=True,
    top_k=1,
    temperature=0.9,
    max_new_tokens=500,
    repetition_penalty=1.5,
    pad_token_id=tokenizer.eos_token_id
)
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
__Prompt Format__: This model does not have a specific prompt format, but the best results could be achieved with a _textbook_ type of format like:
```
### Chapter 1: Elon Musk and Iron Man
Elon met Tony at a Cafe in Monaco, then they had a conversation about
```
You can also try something like this:
```
Question: Who are you?
Answer:
```
But since the model isn't made for chat/question answering, the result won't be good enough.
__Repetition Penalty__: Since most of these models like to repeat themselves, just keep that number as it is. You can increase or decrease it to your liking, but keep in mind that a value lower than 1 makes the model _super repetitive_.
## Known Issues
* Since we only had 420k rows of data, a lot of information is missing from this model. As mentioned earlier in this very model card, it's a _proof of concept_ model.
* You may want to test it with coding. Let's just say the model is terrible at coding. We may release a coding-optimized model as soon as possible.
## Our Team
* Muhammadreza Haghiri ([X (formerly Twitter)](https://twitter.com/haghiri_ai) - [Website](https://haghiri75.com/en) - [Github](https://github.com/prp-e) - [LinkedIn](https://www.linkedin.com/in/muhammadreza-haghiri-1761325b))
* Mahi Mohrechi ([Website](https://mohrechi-portfolio.vercel.app/) - [Github](https://github.com/f-mohrechi) - [LinkedIn](https://www.linkedin.com/in/faeze-mohrechi/))
## Special Thanks
* LMLabs for providing 1B untrained model.
* Mistral Team for providing the best open source base model ever.
* _Sina Rashidi_, who translated Alpaca dataset to Persian.
* [Jupyto](https://jupyto.com) team for providing our infrastructure.
| RichardErkhov/friendshipkim_-_Llama-3.2-1B-pruned-h0.5-i0.5-a0.0-awq | RichardErkhov | 2024-11-20T16:10:12Z | 5 | 0 | null | ["safetensors", "llama", "arxiv:1910.09700", "4-bit", "awq", "region:us"] | null | 2024-11-20T16:09:49Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-pruned-h0.5-i0.5-a0.0 - AWQ
- Model creator: https://huggingface.co/friendshipkim/
- Original model: https://huggingface.co/friendshipkim/Llama-3.2-1B-pruned-h0.5-i0.5-a0.0/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| DongfuJiang/prm_version3_subsample_no_ref_hf | DongfuJiang | 2024-11-20T16:08:16Z | 6 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-20T09:42:32Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: prm_version3_subsample_no_ref_hf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prm_version3_subsample_no_ref_hf
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the prm_conversations_prm_version3_math+webinstructsub-mcq+webinstructsub-oe+apps+gsm_subsample_hf dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2103 | 0.2253 | 500 | 0.2052 |
| 0.1811 | 0.4506 | 1000 | 0.1891 |
| 0.1809 | 0.6759 | 1500 | 0.1767 |
| 0.1841 | 0.9012 | 2000 | 0.1706 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
| RichardErkhov/alpindale_-_Llama-3.2-1B-Instruct-awq | RichardErkhov | 2024-11-20T16:04:32Z | 5 | 0 | null | ["safetensors", "llama", "arxiv:2204.05149", "4-bit", "awq", "region:us"] | null | 2024-11-20T16:03:22Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-Instruct - AWQ
- Model creator: https://huggingface.co/alpindale/
- Original model: https://huggingface.co/alpindale/Llama-3.2-1B-Instruct/
Original model description:
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
Llama 3.2 Version Release Date: September 25, 2024
“Agreement” means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
distributed by Meta at https://llama.meta.com/doc/overview.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are
entering into this Agreement on such person or entity’s behalf), of the age required under
applicable laws, rules or regulations to provide legal consent and that has legal authority
to bind your employer or such other person or entity if you are entering in this Agreement
on their behalf.
“Llama 3.2” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://www.llama.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or Switzerland)
and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof),
or a product or service (including another AI model) that contains any of them, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
on a related website, user interface, blogpost, about page, or product documentation. If you use the
Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the
following attribution notice within a “Notice” text file distributed as a part of such copies:
“Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
incorporated by reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates,
is greater than 700 million monthly active users in the preceding calendar month, you must request
a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
California without regard to choice of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
any dispute arising out of this Agreement.
### Llama 3.2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
The most recent copy of this policy can be found at
[https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
#### Prohibited Uses
We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
1. Violate the law or othersโ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individualsโ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)
10. Illegal drugs and regulated/controlled substances
11. Operation of critical infrastructure, transportation technologies, or heavy machinery
12. Self-harm or harm to others, including suicide, cutting, and eating disorders
13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: LlamaUseReport@meta.com
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## How to use
This repository contains two versions of Llama-3.2-1B-Instruct, for use with `transformers` and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline
model_id = "meta-llama/Llama-3.2-1B-Instruct"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
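The Auto-class route mentioned above looks roughly like this (a minimal sketch using standard `transformers` APIs; it is equivalent to the pipeline example, not additional functionality of this checkpoint):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
# Apply the chat template, generate, and decode only the newly generated tokens.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```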
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more, at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```bash
huggingface-cli download meta-llama/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```
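The same files can be fetched from Python with `huggingface_hub` (a minimal sketch; it assumes the library is installed and that you are authenticated for this gated repository):
```python
from huggingface_hub import snapshot_download

# Download only the original/* checkpoint files into a local directory.
snapshot_download(
    repo_id="meta-llama/Llama-3.2-1B-Instruct",
    allow_patterns=["original/*"],
    local_dir="Llama-3.2-1B-Instruct",
)
```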
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Total | 830k | 86k | | 240 | 0 |
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
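As a purely illustrative aside, token-level distillation of the kind described above is commonly implemented as a temperature-softened KL divergence between teacher and student logits (a hedged sketch only; this is not Meta's training code, and the function name and temperature value are hypothetical):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then push the student toward the
    # teacher's token-level distribution with a KL divergence.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2
```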
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmarks - English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 63.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 40.1 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 19.0 | 17.2 |
| Instruction following | | IFEval | 0 | avg(prompt/instruction acc loose/strict) | 59.5 | 77.4 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 77.7 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 47.3 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 78.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 32.8 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 69.8 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 67.0 | 70.9 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 34.3 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | 19.8 | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | 63.3 | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | 84.7 | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 58.2 | 68.9 |
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro\_avg/acc) | Portuguese | 39.82 | 54.48 | 62.12 |
| | | Spanish | 41.5 | 55.1 | 62.5 |
| | | Italian | 39.8 | 53.8 | 61.6 |
| | | German | 39.2 | 53.3 | 60.6 |
| | | French | 40.5 | 54.6 | 62.3 |
| | | Hindi | 33.5 | 43.3 | 50.9 |
| | | Thai | 34.7 | 44.5 | 50.3 |
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. Weโve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building dedicated evaluation datasets for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the modelโs capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2โs 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Metaโs Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2โs potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
mini1013/master_cate_lh12
|
mini1013
| 2024-11-20T16:04:02Z | 332 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T16:03:40Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์ด๊ธ๋ฃจ์บ S3ํ๋ฌ์ค 2K 300๋งํ์ ๊ฐ์ ์ฉ CCTV ํ ์นด๋ฉ๋ผ ํ์บ (์ฃผ) ํธ๋ฃจ์
- text: ๋ฉ์์ฝ ์กฐํธ๋น๋ 400g ๋ง๋ฅ์ธ์ ์ธํ ์ธ์ ๋นจ๋ ๊ธฐ๋ฆ๋ ์ผ๋ฃฉ์ ๊ฑฐ ์์ค์ฒญ์ ๋ผ์ฝ๋ก๋ ์กฐํธ๋น๋ 400g (๋ธ๋ฃจ) ๋ฆฌ์์ค๋ฆฌ๋ธ
- text: ํผํ์ฐ๊ตฌ์ ์๋ฉด๋ฐฉ์ ๋งคํธ ๋์๋ฆฌ 145x150cm ๋ก์ง๋ธ๋ผ์ด ํผํฌ๋ ๊ฐ์ฑ ํ์ดํธ_M ์คํธ๋ฆผํ๋ฌ๋
- text: ๋ค์ฐ๋ ์ธํธ๋ผ ์์ดํ๋ฆด ํ๋ ์ 5.03L [์ํ] ์ฌ์ ์ ์ฐ์ _ํผ์ฃค ํํฌ๋ก์ฆ 3.1L x 4๊ฐ ์๋ก์ฐ๋ก์ผ
- text: ์ฐฝ๋ฌธ ์๋ ๋กค๋ฐฉ์ถฉ๋ง ์ํ์ ๋ฏธ์ธ ๋ํ ์
ํ๊ต์ฒด ์ฌ๋ฉด ๊ฐ๋ก300x์ธ๋ก250mm ์ฌ๋ฉด_๊ฐ๋ก1600mm(1501~1600)_์ธ๋ก600mm(501~600) NKํ
ํฌ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.7296620438939007
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
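A minimal sketch of this two-phase recipe with the `setfit` library is shown below (the few-shot dataset here is a placeholder, since the actual training data for this model is not published; the hyperparameter values mirror the Training Hyperparameters section further down):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; the real run used 50 samples per class.
train_dataset = Dataset.from_dict({
    "text": ["example product title A", "example product title B"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(batch_size=512, num_epochs=20)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # phase 1: contrastive fine-tuning; phase 2: fit the LogisticRegression head
```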
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 9.0 | <ul><li>'์ ๋ฐํ ์คํ
์ ๋ณด๋กค ๋์คํ์ ํฐ ๊ฑฐ์น ์ผ์ด์ค ์ ๊ด์ค๋ฒ 04_CNDH-03 ์คํ
์ ๋ณด ์ ๊ด ๊ณจ๋์ ๋ฐ (์ฃผ)์์์คํธ๋ ์ด๋'</li><li>'ํ์ด๋ธ๋ฆฌ๋ ์ฐ๋ฆฌํน ๋ฏธํธํ์ดํผ ํด๋์ง 2๋กค ์ ํ04.ํฌ๋ก์ค ์์๋ฉธ๊ท ํก์์ง 2๋กค ์ฐํ
๋ง์ผ'</li><li>'ํฌ๋ฆฌ๋ฅ์ค ํด๋ฆฐ์ผ์ด ์์ฟ ์ ๋ฉ๊ฐ๋กค 3๊ฒน 50m 30๋กค ํด๋ฆฐ์ํํธ 3๊ฒน ๋ฐ์ฝ 30m 30๋กค ๋ฉ๋ฆฌ์ค'</li></ul> |
| 2.0 | <ul><li>'์ ๊ฒฝ ๋ก์ ํ๋กํด๋ฆฌ์ค ์๋์
์ ๋ฌผ์ธํธ ์ถ์ ์ ๋ฌผ์ธํธ ๋ช
์ ์ ๋ฌผ์ธํธ ์ดํธ ๋ฆฐ์ค ๊ฐ์ฑ์'</li><li>'๋ก์ ํ๋กํด๋ฆฌ์ค ์
๋ ์
29ํธ X 1๊ฐ ๊ณ ๊ธฐ๋ฅ ์๋์
์ผ๋ก ํ๋ณต ์ ๋ฌผ MinSellAmount ํ๋๋ง๋ง'</li><li>'๋๋ฆผ๋ฐ์ค ์นด์นด์ค ๋ผ์ด์ธ ์ ๋ฌผ์ธํธ GIFT BOX (์ค์๊ธฐ+์ค์์ค+ํํฐ4P+์ผ์์ ๊ฑฐ๋ณผ1P) ์นด์นด์ค ์ ๋ฌผ์ธํธ GIFT BOX [๋ผ์ด์ธ] ๋ฐ์คํ
'</li></ul> |
| 0.0 | <ul><li>'์ด์ํ ์นด๋ฉ๋ผ CCTV ๋ฌด์ ๋ฏธ๋ ๊ฐ์ ์ด์ํ ์นด๋ฉ๋ผ + 128GB SD์นด๋_(๋ฆฌ๋ทฐ์ฝ์)SD์นด๋ 32GB+๋ฐฉ์์ผ์ด์ค+๊ฑฐ์น๋2์ข
์ผ๋ ๋'</li><li>'์ง๋ฅด์ฝ ๋ฉํฐ ํ์ง๊ธฐ HD70 ๋ฉํฐํ์๊ธฐ ZIRCON ๋ฉํฐํ์ง ํ๋์ค๋ฌ'</li><li>'๋ฉ๋ชจ๋ฆฌ์ ํ ํฐํผ๋งํฌ Tapo TC70 200๋งํ์ 360๋ํ์ ์ค๋ด๋ฌด์ ์นด๋ฉ๋ผ ํCCTV ์ผ๊ฐํ๋ฐฑ์ ํ ์ ํ4 Tapo TC70+๋ฉ๋ชจ๋ฆฌ์นด๋128G ์ผ์ฑ๋์ค์จ์ฃผ์ํ์ฌ'</li></ul> |
| 4.0 | <ul><li>'๋ฐ์ด์ค ๊ฑด์กฐ๊ธฐ ๋๋ผ์ด์ํธ ์์๋์ดํ๋ ์ฌ 160๋งค 1๊ฐ ์๋์ ํฑ'</li><li>'ํผ์ฃค ํํฌ๋ก์ฆ 3.1L ํผ์ฃค ๋น์์นด 3100ml 1์
์ฃผ์ํ์ฌ ๋๋ฆผ์ผํ'</li><li>'๋ค์ฐ๋ ์์คํผํธ ์ค๋ด๊ฑด์กฐ ์ฌ์ ์ ์ฐ์ 1L ์ํํฅ๊ธฐ ์ฝํผํจ์ด ์ฉ๊ธฐ 1L (์ฃผ)๋ชจ๋์ปดํผ๋'</li></ul> |
| 8.0 | <ul><li>'์์ ค๊ฐ๋ ํนํ 90๋ ํ์ ์ ๊ธฐ๋ชจ๊ธฐ์ฑ ์ถฉ์ ์์ ์ํ๋ฆฌ์ฑ ๊ฑด์ ์ง๋ 01. ํนํ๋ฐ์ ledํ์ ๋ชจ๊ธฐ์ฑ(์ถฉ์ ์ ๋) ํํฌ WOODํํฌ'</li><li>'ODF169432ํดํผํ ์์ด๋ท ๊ฑธ์ดํ ์ ์ด์ ์ฝ๋ฆฌ์(JL KOREA)'</li><li>'์ด๊ฐ๋ ฅํด์ถฉํฌ๋ฌ์ ๊ธฐ๋ชจ๊ธฐ์ฑ(ํน๋) ๋นํธํ
ํฌ๋
ธ'</li></ul> |
| 6.0 | <ul><li>'ํจ์ผ ํผ์ค ํ์์ ค ๋ผ๋ฒค๋ ๋๋ผ์ฉ ๋ฆฌํ 1.8L ํผ์ค ํ์์ ค ๋๋ผ์ฉ 1.8L(์ผ๋ฐ/๋๋ผ ๊ฒธ์ฉ) ๋๋ฆฌํ๋ฌ์ค'</li><li>'๋ค์ฐ๋ ํ๋ฆฌ๋ฏธ์ ์์คํผํธ ์ค๋ด๊ฑด์กฐ ์ธํ์ธ์ ์ก์ฒดํ 1.9L 08_๋ค์ฐ๋ ์ฝํผ ํจ์ด๋ฌ๋ธ 1L (์ฃผ)๋ฅ์คํธ์๋์ฝํผ๋ ์ด์
'</li><li>'์ ๊ฒฝ์ฐ์
์คํํฌ ์ฐฌ๋ฌผ์ ์๋
น๋ ์ธํ์ธ์ ๋ฆฌํ 9.5kg 1๊ฐ ์ผํน(SHOW KING)'</li></ul> |
| 3.0 | <ul><li>'์ผํ์นดํธ ๋ฐํด๋ฌ๋ฆฐ์ฅ๋ฐ๊ตฌ๋ ์์ฅ๋ฐ๊ตฌ๋์บ๋ฆฌ์ด ์ ์ด์ ์์๋ ํธ๋ ์นดํธ ๋งํธ ๋ฒ ์ด์ง์ฒดํฌํจํด (ํ์
07) ์ํ ํจํด 8์ข
_์ฒดํฌ ๊ณค์ ์์ด์ค๋์ค(A Orders)'</li><li>'์คํ
์ธ๋ ์ค ๊ฐ์ ์ฉ ์ํ ์ํ ์คํ์ด ์คํ ๋ ๋ฐฉ์ง ์ฌ 01.๋ฝ ๊ตฌํ ๋ธ๋ ๋ผ์ง ์์ด๋ฏธ์ด๋ธ'</li><li>'1์ด์์ฑ ์ํฐ์น๋ชจ๊ธฐ์ฅ ํ
ํธ ์นจ๋ ์ฌ๊ฐ ์๊ธฐ ๋ํ ์ฐฝ๋ฌธ 2_๋ฒ ์ด์ง ๋ธ๋ฃจ 2~3์ธ์ฉ(200X150) ๋ค์ต๋ชฐ'</li></ul> |
| 5.0 | <ul><li>'๊ธ๋น ๊ฒ๊ธฐ์ ๊ท ํ๋ฆฌ๋ฏธ์ ์์ด๋๋งค์ง ์ค์ํ ๋ํ 10p+10p(์ด 2ํฉ) ํฌํฐ๊ธฐ์ ๊ท ๋ํ 10p+10p ๋๋ฃจ(NARU)๋ฆฌํ
์ผ'</li><li>'๋ํ๋ ์คํ์ผ ์ธ๋์จ์ด ์ฌ๋ฆผ ๋ผ์ดํธํ ์คํ ์ฌ์ฑ์ฉ 10๊ฐ์
x8ํฉ/์์ค๊ธํฌํฐ ์ฑ์ธ๊ธฐ์ ๊ท ์ก๊ด๋ฌผ๋ฅ'</li><li>'์ ํํด๋ฒ๋ฆฌ ๋ํ๋ ์์ฌํ๋ฌ์ค ์คํ 9๋งค -1๊ฐ ์ฃผ์ํ์ฌ ๋ฏผ์'</li></ul> |
| 7.0 | <ul><li>'๋งํ ์ ๋ฐ ํ์ทจ์ 100ml ๋ฐ๋์ ์ ๋ฐ๋์ ์ ๊ฑฐ MinSellAmount ๋์ฝ์'</li><li>'์์ฆ์ฟจ ๋นจ๊ฐ์ ์ ๋ฐ๊ฑด์กฐํ์ทจ์ ๋์ ์ต๊ธฐ์ ๊ฑฐ MinSellAmount SMH๋ง๋ฌผ์ํ'</li><li>'ํ๋ธ๋ฆฌ์ฆ ํฌ๋งจ ์ฟจ์์ฟ ์ํฅ ๋ฆฌํ 320ml ํฌ๋งจ ์ฟจ์์ฟ ์ํฅ ๋ฆฌํ 320ml ์ง๊ธฐ์ต'</li></ul> |
| 1.0 | <ul><li>'์ข์๋๋ ์
๋ ์ค๋ฒ๋์ดํธ ์คํ 8๋งค x 1ํฉ ์ฃผ์ํ์ฌ ๋ค์ฌ์ฐ๊ตฌ์'</li><li>'๋์ค์ปต ๋ด๋ชธ์ ์๊ฐํ๋ ์์ ํ ์ค๋ฆฌ์ฝ ์๋ฆฌ์ปต ์ํ luckytiger3'</li><li>'ํ์ดํธ ์ํผํก์ ์คํ (30+6)๊ฐ์
(์ฃผ) ์ผ์ฑ ์์ด์น์์จ'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.7297 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh12")
# Run inference
preds = model("์ด๊ธ๋ฃจ์บ S3ํ๋ฌ์ค 2K 300๋งํ์ ๊ฐ์ ์ฉ CCTV ํ ์นด๋ฉ๋ผ ํ์บ (์ฃผ) ํธ๋ฃจ์")
```
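If class probabilities are needed rather than a hard label, `SetFitModel` also exposes `predict_proba` (a small usage sketch on the same example; expect one row of 10 class probabilities per input):
```python
probs = model.predict_proba(["์ด๊ธ๋ฃจ์บ S3ํ๋ฌ์ค 2K 300๋งํ์ ๊ฐ์ ์ฉ CCTV ํ ์นด๋ฉ๋ผ ํ์บ (์ฃผ) ํธ๋ฃจ์"])
print(probs.shape)  # (1, 10)
```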
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.964 | 24 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0127 | 1 | 0.3941 | - |
| 0.6329 | 50 | 0.3041 | - |
| 1.2658 | 100 | 0.1323 | - |
| 1.8987 | 150 | 0.0705 | - |
| 2.5316 | 200 | 0.0185 | - |
| 3.1646 | 250 | 0.021 | - |
| 3.7975 | 300 | 0.0292 | - |
| 4.4304 | 350 | 0.0158 | - |
| 5.0633 | 400 | 0.0176 | - |
| 5.6962 | 450 | 0.0001 | - |
| 6.3291 | 500 | 0.0079 | - |
| 6.9620 | 550 | 0.0004 | - |
| 7.5949 | 600 | 0.0001 | - |
| 8.2278 | 650 | 0.0001 | - |
| 8.8608 | 700 | 0.0001 | - |
| 9.4937 | 750 | 0.0001 | - |
| 10.1266 | 800 | 0.0001 | - |
| 10.7595 | 850 | 0.0001 | - |
| 11.3924 | 900 | 0.0001 | - |
| 12.0253 | 950 | 0.0001 | - |
| 12.6582 | 1000 | 0.0 | - |
| 13.2911 | 1050 | 0.0 | - |
| 13.9241 | 1100 | 0.0001 | - |
| 14.5570 | 1150 | 0.0 | - |
| 15.1899 | 1200 | 0.0 | - |
| 15.8228 | 1250 | 0.0 | - |
| 16.4557 | 1300 | 0.0001 | - |
| 17.0886 | 1350 | 0.0 | - |
| 17.7215 | 1400 | 0.0 | - |
| 18.3544 | 1450 | 0.0 | - |
| 18.9873 | 1500 | 0.0 | - |
| 19.6203 | 1550 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
WK73/path-to-save-model
|
WK73
| 2024-11-20T16:03:11Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-11-20T15:22:36Z |
---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - WK73/path-to-save-model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch for running this DreamBooth pipeline (assumes a CUDA GPU;
# the prompt reuses the instance token "sks" the weights were trained on).
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "WK73/path-to-save-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
RichardErkhov/andrijdavid_-_Llama3-1B-Base-awq
|
RichardErkhov
| 2024-11-20T15:59:48Z | 5 | 0 | null |
[
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | 2024-11-20T15:58:32Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama3-1B-Base - AWQ
- Model creator: https://huggingface.co/andrijdavid/
- Original model: https://huggingface.co/andrijdavid/Llama3-1B-Base/
Original model description:
---
license: cc-by-4.0
language:
- en
pipeline_tag: text-generation
---
# Llama-3-1B-Base
Llama3-1b is a trimmed version of the official [Llama-3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) base model from [Meta](https://huggingface.co/meta-llama).
It has been reduced in size to ~1 billion parameters, making it more computationally efficient while still retaining a significant portion of the original model's capabilities.
This model is intended to serve as a base model and has not been further fine-tuned for any specific task.
It is specifically designed to bring the power of LLMs (Large Language Models) to environments with limited computational resources. This model offers a balance between performance and resource usage, serving as an efficient alternative for users who cannot leverage the larger, resource-intensive versions from Meta.
**Important**: This project is not affiliated with Meta.
## Uses
This model can be fine-tuned for a variety of natural language processing tasks, including:
- Text generation
- Question answering
- Sentiment analysis
- Translation
- Summarization
## Bias, Risks, and Limitations
While Llama3-1b is a powerful model, it is important to be aware of its limitations and potential biases.
As with any language model, this model may generate outputs that are factually incorrect or biased.
It is also possible that the model may produce offensive or inappropriate content.
Users and developers should be aware of these risks and take appropriate measures to mitigate them.
## How to Use
To use Llama3-1b, you can load the model using the Hugging Face Transformers library in Python:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("andrijdavid/Llama-3-1B-Base")
model = AutoModelForCausalLM.from_pretrained("andrijdavid/Llama-3-1B-Base")
```
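Note that the snippet above loads the original full-precision repository. Loading this AWQ-quantized repository should look similar (a hedged sketch; it assumes `autoawq` is installed so that `transformers` can dispatch the AWQ kernels):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "RichardErkhov/andrijdavid_-_Llama3-1B-Base-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```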
|
RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-medmcqa-awq
|
RichardErkhov
| 2024-11-20T15:59:12Z | 5 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"awq",
"region:us"
] | null | 2024-11-20T15:57:54Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-1B-Instruct-medmcqa - AWQ
- Model creator: https://huggingface.co/ank028/
- Original model: https://huggingface.co/ank028/Llama-3.2-1B-Instruct-medmcqa/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mini1013/master_cate_lh11
|
mini1013
| 2024-11-20T15:55:19Z | 183 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T15:54:57Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์นธํ ์ค ๋จ์ฌ ๊ธฐ๋ฅ์ฑ ๋ค์ด์ดํธ ์ง์์ฌ๋ฆฌํผ 5. ์ ์ฌ์ฑ์๊ฐ_240 ์จ๋๋ฆฌ์ฐ์
- text: ๊ตญ์ฐ ํค๋ผ์นธ ์ผ๋ํ ์ผ๋ํ ๋ฐ ๋ฐ๋ฐ๋ฅ ์ง์ ๊ฑด๊ฐ ์ฌ๋ฆฌํผ ์ค๋ดํ ์ฐํํฌ(M) ์์ผ๋ง์ผ
- text: ํ๋ฆฌ์ค ์ข
์๋ฆฌ๋ง์ฌ์ง๊ธฐ V3 ๋์์ดshop
- text: ํ ๋ธ๋ฌ์ฌ ๋ฐ๊ฐ์ง ์ ๊ฑฐ ๋ง์ฌ์ง ํ์ค๋ฉ
- text: ๋ง์ฌ์ง ์ค๋ด ๋ฐ์ง์๋งคํธ ๋์ง์ํ 50x200CM๋ณด๋ณด๋ณด์๊ฝ๋กฑ ๋์น๊ธ๋ก๋ฒ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9710123383380407
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 8 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6.0 | <ul><li>'์๊ฐ ๋งคํธ ์ง์ํ ์กฐ์ฝ๋ ๋ฐํ ์ง์๊ธธ ์ง์ ๋ฐ๋งคํธ 40X60CM ์ปฌ๋ฌํ ์์ด์'</li><li>'๋ฝ์ก๋ฝ์ก ๋ฉ๋ชจ๋ฆฌํผ ๋ฐ๋ฆ๊ฐ๋งคํธ ์์ค ์ฃผ๋ฐฉ ๋ฐ๋งคํธ ๋ฌ๊ทธ ๋ํ ๋ฉด ํ์ฅ์ค ๋ผ์ง 50X80_๋ ๋ ์๊ทธ๋๋ชฐ'</li><li>'๊ตณ์ ์ด ๊ฑฑ์ ์๋ ํนํ๊ธฐ์ ํฌ์คํ ๋ฐ ์ง์๋งคํธ ์ง์ํ ๋ถ๋ชจ๋ ์ ๋ฌผ ํจ๋ ํฌ์คํ ์ด๋ ๊ธฐํ์ํ_ํํฌ ๋ถ์ฝ๋ฆฌ์'</li></ul> |
| 1.0 | <ul><li>'๊น์์ ์์ ค ์ข
์๋ฆฌ ๋ฐ๋ง์ฌ์ง๊ธฐ ๋ค๋ฆฌ์๋ง๊ธฐ GKM-1004 ๋ฅํฐPLUS'</li><li>'๋ํ๋ ์ค ์๋ผ์จ์ด ์จ์ด์ฐ์ง ๋ฐ๋ง์ฌ์ง๊ธฐ DP-FM700 ์ ์ธ์ '</li><li>'7๋ง์๋ ์ถ๊ฐํ ์ธ ์ฌ๋ฆํด๊ฐํ์ [ ]์
๊ทธ๋ ์ด๋ 3์ธ๋ ์ค๋งํธ์ผ์ ์ข
์๋ฆฌ๋ถํฐ ํ๋ฒ
์ง๊น์ง ๋ถ๊ธฐ ์น!! ๋ฌด์ ๋ค๋ฆฌ๋ง์ฌ์ง๊ธฐ SR-S1+์ฌ๋ฆฌ๋ฐ์ญ์ค(์๋ก์ฐ) ์๋ จ๋ท์ปด'</li></ul> |
| 5.0 | <ul><li>'๋ฐ๊ฐ์ง์๋ง ์ค๋ฆฌ์ฝ ํจ๋ ๋ฐ๋ณด์ต ์๋ง ๋ฐ๊ฐ์ง ์ผ์ด ํ์ดํธ ์์ง์ผ์ด ์ฃผ์ํ์ฌ'</li><li>'[BZJKWP4I_49]irbrush ๋ค๊ฟ์น ํจ๋ ํ์ผ์ด ๋ฐ๊ฐ์ง 3.๋ธ๋(5mm)FREE ๋กฏ๋ฐ์์ด๋ชฐ'</li><li>'์ผ์๊ณต๊ฐ ๋ณด๋๋์กฑ ๋ฐ๋ณดํธ๋ 1+1 ๋ค๊ฟ์น ๋ฐ๊ฐ์ง ๋ณด์ตํํจ๋ ์๋ง ๋ฐ๋ณดํธ๋ 1+1_์คํจ L 1์+ํ์ดํธ M 1์ ์ฃผ์ํ์ฌ ์ด๊ณต๊ตฌ์ค'</li></ul> |
| 4.0 | <ul><li>'์พ๋ฐQ 60๋งค ๋ฐ๋์์ ๊ฑฐ์ /์ ํด์ธ๊ท ์ต์ /๋ฌด์ข/๋ฐ๊ด๋ฆฌ ํดํผMART'</li><li>'ํธ๋ฐฑ๋๋ฌด ์์ฆํ๋ ์ฌ ์ ๋ฐ์ฅ ์ท์ฅ ํ์ทจ ๋ฐ๋์์ ๊ฑฐ ์ ์ต ๋ฐฉํฅํจ๊ณผ ํฌ๋ ๋น์ค'</li><li>'[๊ณต์์์
] ๋ฐ๋์์ ๊ฑฐ์ ๊ทธ๋์ฆ๋ ๋ฏธ๋ ํํผ๋ฏผํธํฅ cscosmetics'</li></ul> |
| 0.0 | <ul><li>'ํ๋ฐ ์์น ์ฌ๋ฆฌํผ ๋ฐ๋ฐ๋ฅ ํต์ฆ ์ํ ์์น์ฌ๋ฆฌํผ ์ฌํ์ด์ด ๋ธ๋ฃจ_290 ํฑํ'</li><li>'ํต๊ตฝ ์ง์์ฌ๋ฆฌํผ ์ค๋ดํ ์ธต๊ฐ์์๋ฐฉ์ง ๋ฏธ๋๋ผ๋ฐฉ์ง ์์คํ ๋ํจ ์ง์ ์๋ก์ฐ 39-40 9025 ํ์ธ๋ฉํ๋ฆญ'</li><li>'[๋ซ์]๋ซ์ ์ง์2 ์ฌ๋ฆฌํผ ์์ด๋ณด๋ฆฌ/230 ํจ์
ํ๋ฌ์ค'</li></ul> |
| 3.0 | <ul><li>'๋ฐ๋ ๋งค์ง ์คํ
์๋ฉด ํ๋ก ๋ฐ๊ฐ์ง์ ๊ฑฐ๊ธฐ ํฌ์ ์ด๋ทฐํฐ'</li><li>'๋ฐ๋ ํจ๋ํผํํธ ์ ๋ ๋ฐ๊ฐ์ง ์ ๊ฑฐ๊ธฐ ํผํ์๋์
๋ฐ ๋ค๊ฟ์น ๋ฐ๋ฐ๋ฅ ๊ตณ์์ด ์ ๊ฑฐ ํจ๋ํ๋๋ 1์ธํธ(์ฌ์ํ ์ฆ์ )_๋ฆฌ๋ทฐ์ฝ์ x (์ฃผ)๋ง๋ฅด์ค๋ฉ์ค'</li><li>'์ค์ ๋ ์คํฌํ ๋ฒํผ ๋จํ ์ธ์ค์ธ๋ง์ผ'</li></ul> |
| 7.0 | <ul><li>'[๋ฆฌํผ๋ธ]๊ตฟํ๋ ๋ ๋ฐธ๋ฐ์คํด ๊ฑด์์ขํ์กฑ์๊ธฐ GOOD-F5 ๋ฆฌํผ๋ธ ๊ฑด์์ขํ์กฑ์๊ธฐ GOOD-F5 ์ฃผ์ํ์ฌ ๊ตฟํ
ํฌ'</li><li>'GSN-1610 ํธ๋ฐฑ๋๋ฌด ์์ ์ธ์ ๊ฑด์ ์ขํ๊ธฐ+์กฑ์๊ธฐ ๊ฒธ์ฉ MinSellAmount ์จ์ ์ดํ'</li><li>'B ๊ตฟํ๋ ๋ ์บ๋๋ค์ฐ ์๋๋ฌด ์๋ชฉ ์ค๋งํธ ๊ฑด์์กฑ์๊ธฐ GOOD-F4 ํด๊ฒ์ค ๊ฐ์ ์ฉ ๋ง์ผ๋ทฐ'</li></ul> |
| 2.0 | <ul><li>'์ผ์๋๊ตฌ์ธํธ ์
ํ ํคํธ ๋ณผ ๋ธ๋ฌ์ฌ ๊ฐ์ ์ฉ ํค์ด ๋ธ๋ฌ์ ๋น ๋จธ๋ฆฌ ๋ชจ๋ฐ ๊ฐํธ 07.Aeib ์ผ์๋น์ธํธ3P_๋ณธ์ํ์ ํ ์ฃผ์ํ์ฌ์ ๋ง์ผ'</li><li>'OC1242 ์๊ฐ๋ฝ ๋ฐ๊ฐ๋ฝ ๊ด์ ๋ณดํธ ๋ณด์ต ์ค๋ฆฌ์ฝ ๊ณจ๋ฌด18์ข
ํต๊ธฐํ์ดํธS(12425) ํ
์ต๋์ค(TAKE THIS)'</li><li>'OC1242 ์๊ฐ๋ฝ ๋ฐ๊ฐ๋ฝ ๋ณดํธ ๋ณด์ต ์ค๋ฆฌ์ฝ ๊ณจ๋ฌด18์ข
์ค๋ฆฌ์ฝ๊ณจ๋ฌด ๊ตฌ๋ฉ๋ซ๋ฆฐ๊ณจ๋ฌด ๋ฐ๊ฐ๋ฝ ๋ณดํธ ๊ณจ๋ฌด์คํจํคL(11033) ์ ์ดํ ์ฃผ์ํ์ฌ'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9710 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh11")
# Run inference
preds = model("ํ ๋ธ๋ฌ์ฌ ๋ฐ๊ฐ์ง ์ ๊ฑฐ ๋ง์ฌ์ง ํ์ค๋ฉ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.9325 | 21 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0159 | 1 | 0.4383 | - |
| 0.7937 | 50 | 0.2003 | - |
| 1.5873 | 100 | 0.0636 | - |
| 2.3810 | 150 | 0.0158 | - |
| 3.1746 | 200 | 0.0239 | - |
| 3.9683 | 250 | 0.0153 | - |
| 4.7619 | 300 | 0.0004 | - |
| 5.5556 | 350 | 0.0023 | - |
| 6.3492 | 400 | 0.0005 | - |
| 7.1429 | 450 | 0.0002 | - |
| 7.9365 | 500 | 0.0001 | - |
| 8.7302 | 550 | 0.0001 | - |
| 9.5238 | 600 | 0.0001 | - |
| 10.3175 | 650 | 0.0001 | - |
| 11.1111 | 700 | 0.0001 | - |
| 11.9048 | 750 | 0.0001 | - |
| 12.6984 | 800 | 0.0 | - |
| 13.4921 | 850 | 0.0001 | - |
| 14.2857 | 900 | 0.0001 | - |
| 15.0794 | 950 | 0.0001 | - |
| 15.8730 | 1000 | 0.0 | - |
| 16.6667 | 1050 | 0.0001 | - |
| 17.4603 | 1100 | 0.0 | - |
| 18.2540 | 1150 | 0.0001 | - |
| 19.0476 | 1200 | 0.0001 | - |
| 19.8413 | 1250 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RichardErkhov/pavan01729_-_my_Llama-3.2-1B-Instruct-awq
|
RichardErkhov
| 2024-11-20T15:50:02Z | 5 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"awq",
"region:us"
] | null | 2024-11-20T15:49:23Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
my_Llama-3.2-1B-Instruct - AWQ
- Model creator: https://huggingface.co/pavan01729/
- Original model: https://huggingface.co/pavan01729/my_Llama-3.2-1B-Instruct/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LLM-EDA/VeriSeek
|
LLM-EDA
| 2024-11-20T15:47:12Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-19T12:37:26Z |
# Veriseek
## Table of Contents
- [Update Log](#update-log)
- [Project Description](#project-description)
- [Installation](#installation)
- [Usage](#usage)
- [Generation](#Generation)
- [Pretrain](#Pretrain)
- [PPO](#PPO)
- [Citation](#Citation)
- [Acknowledgement](#Acknowledgement)
## Update Log
- 2024.08.02: Add citation information.
- 2024.07.22: Initial version of Veriseek. Add training code and config file.
## Project Description
This study introduces a novel approach utilizing reinforcement learning with golden code feedback to enhance the performance of pre-trained models. Leveraging open-source data and base models, we have achieved state-of-the-art (SOTA) results by a substantial margin. Notably, our 6.7B parameter model VeriSeek demonstrates superior performance compared to the current best-in-class 13B and 16B models. Furthermore, through a comprehensive analysis of the limitations of direct fine-tuning and the training dynamics of reinforcement learning, we posit that the development of comprehensive supervisory signals, which align with the inherent parallel semantics of Verilog code, is critical to effective generation.
## Installation
To install this project, follow these steps:
1. Clone the repository: `git clone https://github.com/CatIIIIIIII/veriseek.git`
2. Navigate to the project directory: `cd veriseek`
3. Install the dependencies: `conda env create -f environment.yml`
## Usage
Before training, configure `src/scripts/single_node.sh` for training, `src/configs/accelerate/accel.yaml` for Accelerate, and `src/configs/deepspeed/ds_z3_config.json` for DeepSpeed.
### Generation
To generate a response, run the following demo:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
def stop_at_stop_token(decoded_string, stop_tokens):
    """
    Produces the prefix of decoded_string that ends at the first occurrence of
    a stop_token.
    WARNING: the decoded_string *must not* include the prompt, which may have stop tokens
    itself.
    """
    min_stop_index = len(decoded_string)
    for stop_token in stop_tokens:
        stop_index = decoded_string.find(stop_token)
        if stop_index != -1 and stop_index < min_stop_index:
            min_stop_index = stop_index
    return decoded_string[:min_stop_index]
prompt = """
Please act as a professional verilog designer.
Implement a data width conversion circuit that converts 8-bit data input to 16-bit data output. The module provides two output ports: valid_out, which indicates the validity of the output data, and data_out, which represents the converted 16-bit output data. The first arriving 8-bit data should be placed in the higher 8 bits of the 16-bit data output. The valid_out and data_out signals are generated in the next clock cycle after the two data inputs. When there is only one data input, valid_out and data_out are not generated immediately. Instead, they wait for the arrival of the next data input to complete the concatenation of the two data inputs before generating valid_out and data_out.
Module name:
width_8to16
Input ports:
clk: Clock signal used for synchronous operation.
rst_n: Active-low reset signal. Defined as 0 for reset and 1 for reset signal inactive.
valid_in: Input signal indicating the validity of the input data.
data_in: 8-bit input data to be converted.
Output ports:
valid_out: Output signal indicating the validity of the output data.
data_out: 16-bit output data resulting from the width conversion.
Implementation:
The data_out register is triggered on the positive edge of the clock signal (posedge clk) or the negative edge of the reset signal (negedge rst_n).
On reset, indicated by !rst_n, the data_out register is cleared to 0.
If the input data is valid (valid_in) and the flag signal is active, the data_out register is updated by concatenating the contents of the data_lock register (8 bits) and the data_in register (8 bits) to form a 16-bit output. The first valid data is temporarily stored, and when the second valid data is inputted, they are concatenated to produce the output valid_out and data_out.
Give me the complete code.
"""
gpu_id = 0
model_name = "LLM-EDA/VeriSeek"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map=gpu_id)
model.eval()
# Sample
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(gpu_id)
sample = model.generate(input_ids, max_length=1024, temperature=0.2, top_p=0.95, do_sample=True)
output = tokenizer.decode(sample[0])
stop_word = "endmodule"
output_trunc = stop_at_stop_token(output, [stop_word])
output_trunc += stop_word
print(output_trunc)
```
### Pretrain
The pretraining data is available at https://huggingface.co/datasets/LLM-EDA/vgen_cpp. Download and put it in the `data/pretrain` directory.
To pretrain the model, run the following command:
```bash
bash src/scripts/single_node.sh src/configs/pretrain/deepseek-7B.yaml
```
### PPO
The reinforcement learning algorithm used in this project is Proximal Policy Optimization (PPO). Opencores data is available at https://huggingface.co/datasets/LLM-EDA/opencores. Download and put it in the `data/opencores` directory. To train the model with PPO, run the following command:
```bash
bash src/scripts/single_node.sh src/configs/preference/deepseek-7B_ppo_ast.yaml
```
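For intuition, the supervisory signal in this stage comes from comparing each generated sample against the golden (reference) code rather than from a learned reward model; the paper derives it from the structure of the Verilog code. The snippet below is only a hedged illustration of such a golden-code reward, with token-level similarity standing in for the AST-based score; it is not the project's implementation.
```python
import difflib
import re

def golden_code_reward(generated: str, golden: str) -> float:
    """Return an illustrative similarity score in [0, 1] against the golden code."""
    tokenize = lambda src: re.findall(r"\w+|\S", src)
    matcher = difflib.SequenceMatcher(None, tokenize(generated), tokenize(golden))
    return matcher.ratio()

print(golden_code_reward("module m; endmodule", "module m(); endmodule"))
```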
## Citation
## Acknowledgement
This repo benefits from [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Thanks for their wonderful works.
|
Legalaz/21_llambo1_10_37
|
Legalaz
| 2024-11-20T15:40:15Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T15:38:13Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9368
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
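For intuition, the linear method takes a weighted average of the source checkpoints' parameters. The sketch below is illustrative only (mergekit performs this at scale over sharded weights); the checkpoint file names are hypothetical, and the weights mirror the YAML above.
```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of parameter tensors, normalized by the weight sum."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        acc = sum(w * sd[name].float() for sd, w in zip(state_dicts, weights))
        merged[name] = (acc / total).to(torch.bfloat16)  # dtype: bfloat16, as configured
    return merged

# Hypothetical usage with the two source models:
# merged = linear_merge([torch.load("top2.pt"), torch.load("top1.pt")], [0.9368, 0.0628])
```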
|
tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF
|
tokoin
| 2024-11-20T15:38:31Z | 8 | 0 |
vllm
|
[
"vllm",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:quantized:mistralai/Ministral-8B-Instruct-2410",
"license:other",
"region:us",
"conversational"
] | null | 2024-11-20T15:38:07Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period):Sections 5 (Liability), 6(Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French sociรฉtรฉ par actions simplifiรฉe
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Models,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*'
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
library_name: vllm
base_model: mistralai/Ministral-8B-Instruct-2410
tags:
- llama-cpp
- gguf-my-repo
---
# tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF
This model was converted to GGUF format from [`mistralai/Ministral-8B-Instruct-2410`](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF --hf-file ministral-8b-instruct-2410-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF --hf-file ministral-8b-instruct-2410-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF --hf-file ministral-8b-instruct-2410-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF --hf-file ministral-8b-instruct-2410-q4_0.gguf -c 2048
```
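As an alternative to the CLI, the same GGUF file can be loaded from Python through the llama-cpp-python bindings. A minimal sketch, assuming `pip install llama-cpp-python` (the `from_pretrained` helper also requires `huggingface_hub`):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF",
    filename="ministral-8b-instruct-2410-q4_0.gguf",
    n_ctx=2048,  # context size, matching the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```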
|
mradermacher/czech-gemma-2-9b-it-i1-GGUF
|
mradermacher
| 2024-11-20T15:32:29Z | 599 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:lodrick-the-lafted/czech-gemma-2-9b-it",
"base_model:quantized:lodrick-the-lafted/czech-gemma-2-9b-it",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-20T11:17:12Z |
---
base_model: lodrick-the-lafted/czech-gemma-2-9b-it
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lodrick-the-lafted/czech-gemma-2-9b-it
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/czech-gemma-2-9b-it-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
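A minimal way to fetch a single quant from this repo, assuming the `huggingface_hub` package is installed; swap the filename for whichever quant you pick from the table below:
```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/czech-gemma-2-9b-it-i1-GGUF",
    filename="czech-gemma-2-9b-it.i1-Q4_K_M.gguf",
)
print(path)
```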
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/czech-gemma-2-9b-it-i1-GGUF/resolve/main/czech-gemma-2-9b-it.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mini1013/master_cate_lh9
|
mini1013
| 2024-11-20T15:29:00Z | 132 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T15:28:38Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์ฝ์ด์ฌ๋ฆฌ๋จธ ์ ์ฉ ๋ฆฌํํจ๋ 6P 2๊ฐ ๋กฏ๋ฐ์์ด๋ชฐ
- text: ๋ฐ๋ฝ ์๋ชฉ ๋ง์ฌ์ง๊ธฐ ์๋ง๊ธฐ ๊ฐํธํ EMS ๋ฐ์๊ตฌ์ ํต์ฆ ํ๋ชฉ ๋ง์ฌ์ง ๋ฐ๋ฝ ์๋ชฉ ๋ง์ฌ์ง๊ธฐ ์ธํธ (์ฃผ)์๊ฐ๋
- text: '[๋ฐ์ด์คํ๋กํ
ํฌ]ํ๋กํ
์ค ํํ์
๋ํ ์ ์ฃผํํจ๋ 2์กฐ(RG01) '
- text: ์ฑ๊ฒ ํฑํฑ๋ณผ ๋
ธ์ธ๋ณต์ง์ผํฐ ์๋ง๋ณผ ์ด๊ฐ๋ณผ ์ ๋ฌผ ๋ชธ์ ๋ฌผ๋ฆฌ์น๋ฃ ์ด๋ฅด์ ํฑ๋ณผ_11.ํฑ๋ณผ(๋๊ตฌ) ์์ปค์ค
- text: '[์ฝ์๋๋ฆผ] ์ ์ฃผํ EMS ์ด๊นจ ๋ง์ฌ์ง๊ธฐ ๋ฏธ์ธ์ ๋ฅ ํด๋์ฉ ์๋ง๊ธฐ ๋ถ๋ชจ๋์ ๋ฌผ ํจ๋์ ๋ฌผ ์ด๊นจ๋ณดํธ๋ M(95~100ํธ) ๊ธ์๋ฆฌํ
์ผ ์ฃผ์ํ์ฌ'
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.894511760513186
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
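As a hedged sketch of those two stages with the `setfit` API (assuming setfit >= 1.0; the toy dataset is illustrative, and the hyperparameters mirror the Training Hyperparameters section below):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy few-shot dataset; the actual card was trained on 50 samples per class.
train_dataset = Dataset.from_dict({
    "text": ["product title about massage pads", "product title about cupping cups"],
    "label": [5, 2],
})
model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(batch_size=512, num_epochs=20)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: fitting the LogisticRegression head
```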
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6.0 | <ul><li>'์๋ฆผ์ ์ ์ ์ธ์ ์กฐ์ฌ๊ธฐ ์ ์ฒดํ์ดํธ ํ๋ฆฝ์ค 250W ๋จํ ์ ์ธ์ ์น๋ฃ๊ธฐ ์๋ ๊ตญ์ฐ ์๋ฃ๊ธฐ๊ธฐ 01 ์ ์ฒดํ์ดํธ e์ฒญ์ถ'</li><li>'๋นํ๊ทธ๋จ ํ๋ฆฝ์ค ์ ์ธ์ ๋จํ ํผ๋ถ๋ฐฉ์ฌ๊ธฐ WGT-8888S VitaGRAM'</li><li>'์์ ์ธ์ ์จ์ด ์น๋ฃ๊ธฐ ํ์์ ์ด๊นจ ๊ฒฝ์ถ ๋จํ ๋ง์ฌ์ง MinSellAmount ์ฐจ๋ฅ์'</li></ul> |
| 2.0 | <ul><li>'HWATO ๊ณ ๊ธํ ๋ถํญ๊ธฐ 14์ปต ๋ผ์ดํ์ต'</li><li>'์ ์ฌํ๋ถํญ์ฉ ๋ฐ์ฃผ๊ธฐ ์ ํต์ฌํ๊ธฐ ๊ด๋ช
์ฌ ์นจ ๊ตฌ๋น ์ธก์ ์ต์ ์๋ฐ์ฃผ๋ ์ฒดํ์๋ ํ๋น๊ธฐ ์๋ ๊ฐํธ ์๋ฆฌ๋ชฝ๋ํฌ'</li><li>'ํ์๋ถํญ๊ธฐ ์ ํ ์๋
๊ฐ๋ฅ ๋ถํญ์ปต 10๊ฐ 1๋ฐ์ค (์ฌ์ด์ฆ์ ํ1-5ํธ) ํ์๋ถํญ2ํธ์ปต ์์๋ฃ๊ธฐ'</li></ul> |
| 5.0 | <ul><li>'์ค๋ฏ๋ก ์ ์ฃผํ ๋กฑ ๋ผ์ดํ ํจ๋ 2p HV-LLPAD-G... 1๊ฐ HV-LLPAD-GY × 2๊ฐ ์ค์์'</li><li>'์ฝ์ด์ฌ๋ฆฌ๋จธ ์ ์ฉ ๋ฆฌํํจ๋ 6P 3๊ฐ [0001]๊ธฐ๋ณธ์ํ CJONSTYLE'</li><li>'ํด๋ญ ๋ฏธ๋ ๋ง์ฌ์ง๊ธฐ ๋ฆฌํํจ๋ ํฐํจ๋ 2๋ฐ์ค ์ด6P /DY_MC ๋ฉธ์น์ผํ'</li></ul> |
| 0.0 | <ul><li>'๋ฅํฐ์ฒดํฌ ์ฌ๋ฆผ Xํ ํ
์ดํ ๋ฌด๋ฆ๋ณดํธ๋(์ข์ฐ๊ฒธ์ฉ 1P) M-์คํ(630475) ํธ๋ ์ด๋ ์์คํ
(TRADE SYSTEM)'</li><li>'๋ฅํฐ์ฒดํฌ ์ฌ๋ฆผ Xํ ํ
์ดํ ์ข
์๋ฆฌ์๋ฐ๋ณดํธ๋(์ข์ฐ๊ฒธ์ฉ 1P) M-์คํ(630499) ํ๋นID'</li><li>'๊ตญ์ฐ ์๋ฃ์ฉ ํ๋ฆฌ๋ณดํธ๋ ํธ์ํ๊ณ ๋ถ๋๋ฌ์ด ํ๋ฆฌ๋ณต๋ ์ ํ01- 001s ํ๋ฆฌ๋ณดํธ๋_XXXL(40~43์ธ์น) ๋ํ๊ฑด๊ฐ'</li></ul> |
| 4.0 | <ul><li>'์คํธ๋ผํ
์๋ฃ์ฉ ์ ์นจ๊ธฐ 4์ฑ๋ STN-220 ์ ์ฃผํ์๊ทน๊ธฐ ์นจ์ ๊ธฐ์๊ทน๊ธฐ ์์ํ (์ฃผ)์คํ๋ฉ๋์นผ'</li><li>'๋์ฐ ์ ์ฃผํ ๋ง์ฌ์ง๊ธฐ ๋ฒํผํ LB-1803 ๋ฏธ๋๋ง์ฌ์ง๊ธฐ ํด๋์ฉ ๋ฌด์ ์๋ง๊ธฐ ์ผ๋ฐ๊ตฌ๋งค_06.๋ฒํผํ2๋ฐ์ค+๋ํํจ๋ 8๋งค+ํก์ฐฉ์ปต8๊ฐ ์ฃผ์ํ์ฌ ์ฒญํ'</li><li>'๊ทน๋์ ์ฃผํ PRO1000 wave GOLD ํฌ์คํธ๋๋ฉ๋์นผ'</li></ul> |
| 1.0 | <ul><li>'์กฐ์ํ ์ด์ํ์ ค ์๋ฃ์ฉ์ ค ํฌ๋ช
5L 1ํต ๋ฌด๋ฃ์ ๋ฌ ์กฐ์์ด์ํ์ ค5L๋ธ๋ฃจ ์ธ๋ธ๋ฉ๋์ปฌ'</li><li>'์ด๋ํ ์๋
ธ์ ค๋ฆฌ ํฌ๋ช
๋ธ๋ฃจ 5L +250ml ๊ณต๋ณ ์๋
ธ๊ฒ ์ด์ํ์ ค๋ฆฌ ECG [0001]๋ธ๋ฃจ 5L CJONSTYLE'</li><li>'์ธ๋ํผ์ ์์ฝ์๋ ์ด์ํ์ ค ํฌ๋ช
250mL 12๊ฐx4ํต 1๋ฐ์ค ์๋
ธ์ ค๋ฆฌ ํผ๋ถ๊ณผ ์ฐ๋ถ์ธ๊ณผ์ฉ ์ธ๋ธ๋ฉ๋์ปฌ'</li></ul> |
| 3.0 | <ul><li>'ํด๋ญ ๋ฏธ๋ ๋ง์ฌ์ง๊ธฐSE YGGlobal'</li><li>'์จ์ด/๊ณต๊ธฐ์/์์ ์ธ์ /์ ์ฃผํ 4์ค์ผ์ด ๋ฌด๋ฆ๋ง์ฌ์ง๊ธฐ[๊ณต๊ธฐ์ ์จ์ด ์์ ์ธ์ ์ง๋๊ธฐ๋ฅ]์๋ง๊ธฐ 05.ํด๋ ๋ฒ ๋ง์ฌ์ง๊ฑด SR825 ์๋ จ๋ท์ปด'</li><li>'ํดํ
ํฌ ํ์ฒด ๊ทผ์ก ๊ฐํ EMS ๋ง์ฌ์ง๊ธฐ ์์คํจ๋ ํํ2 HT-W03A '</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.8945 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh9")
# Run inference
preds = model("์ฝ์ด์ฌ๋ฆฌ๋จธ ์ ์ฉ ๋ฆฌํํจ๋ 6P 2๊ฐ ๋กฏ๋ฐ์์ด๋ชฐ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.78 | 21 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0182 | 1 | 0.4065 | - |
| 0.9091 | 50 | 0.2829 | - |
| 1.8182 | 100 | 0.0954 | - |
| 2.7273 | 150 | 0.0196 | - |
| 3.6364 | 200 | 0.0057 | - |
| 4.5455 | 250 | 0.0069 | - |
| 5.4545 | 300 | 0.0024 | - |
| 6.3636 | 350 | 0.0003 | - |
| 7.2727 | 400 | 0.0002 | - |
| 8.1818 | 450 | 0.0001 | - |
| 9.0909 | 500 | 0.0001 | - |
| 10.0 | 550 | 0.0001 | - |
| 10.9091 | 600 | 0.0001 | - |
| 11.8182 | 650 | 0.0001 | - |
| 12.7273 | 700 | 0.0001 | - |
| 13.6364 | 750 | 0.0001 | - |
| 14.5455 | 800 | 0.0001 | - |
| 15.4545 | 850 | 0.0001 | - |
| 16.3636 | 900 | 0.0001 | - |
| 17.2727 | 950 | 0.0001 | - |
| 18.1818 | 1000 | 0.0001 | - |
| 19.0909 | 1050 | 0.0 | - |
| 20.0 | 1100 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mav23/pythia-1b-deduped-GGUF
|
mav23
| 2024-11-20T15:26:48Z | 74 | 0 | null |
[
"gguf",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T15:16:59Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. "Equivalent"
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better "follow" human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most "accurate" text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
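As a quick sanity check on those figures (143,000 steps at a batch of 2,097,152 tokens):
```python
steps, tokens_per_step = 143_000, 2_097_152
assert steps * tokens_per_step == 299_892_736_000  # total training tokens quoted above
assert 1_000 * tokens_per_step == 2_097_152_000    # tokens between saved checkpoints
```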
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA โ OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning ChallengeโEasy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
PrunaAI/robert-admin1707-falcon7b-Social-Eng-qa-v3-bnb-8bit-smashed
|
PrunaAI
| 2024-11-20T15:26:33Z | 5 | 0 | null |
[
"safetensors",
"falcon",
"pruna-ai",
"custom_code",
"base_model:robert-admin1707/falcon7b-Social-Eng-qa-v3",
"base_model:quantized:robert-admin1707/falcon7b-Social-Eng-qa-v3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-20T15:17:08Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: robert-admin1707/falcon7b-Social-Eng-qa-v3
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.
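For intuition, a "Sync" latency measurement could look like the hedged sketch below; this is not PrunaAI's benchmark code, and it assumes a CUDA device with a loaded `model` and `input_ids`:
```python
import time
import torch

def sync_latency_ms(model, input_ids, warmup=3, iters=10):
    for _ in range(warmup):  # exclude one-time CUDA overheads ("first" metrics)
        model.generate(input_ids, max_new_tokens=32)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model.generate(input_ids, max_new_tokens=32)
    torch.cuda.synchronize()  # "Sync": wait for all GPU work before stopping the clock
    return (time.perf_counter() - start) * 1000 / iters
```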
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo robert-admin1707/falcon7b-Social-Eng-qa-v3 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/robert-admin1707-falcon7b-Social-Eng-qa-v3-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("robert-admin1707/falcon7b-Social-Eng-qa-v3")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model robert-admin1707/falcon7b-Social-Eng-qa-v3 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
mini1013/master_cate_lh8
|
mini1013
| 2024-11-20T15:24:36Z | 255 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T15:24:13Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ํ๊ณตํ๋ท์ปด ์ธ๊ณ์ง๋ ๋ํ ์๋๋งต ์ธ๊ณ์ง๋03_600x900 (์ฃผ)์ค๋น
- text: ์คํ๋ง ์ ๋ณธ PDF ํ๋ฐฑ ๊ณ ํ์ง ๋ ์ด์ ์ถ๋ ฅ - ํ๋ฐฑ ์๋ฉด์ธ์ ๋ชจ์กฐ์ง80g 50p ์คํ๋ง ํ๋ฐฑ์๋ฉดโโ_๋ชจ์กฐ์ง100g_167~170 page ๋์์ถํ ๋ฒํ
- text: '[๋ฌํ์ดํผ] ๋ฌํ์ดํผ ๋ฏธ๋๋ฏธ๋ 6์ข
์ฝ์ postcard ์ธํ
๋ฆฌ์ด์ฝ์ 6 ๋ฏธ๋๋ฏธ๋ ์ผํ๋ ์ฃผ์ํ์ฌ ์ฒ์ ๋ท์ปด'
- text: ํ๊ฐ ํ์๋ง ํ๊ฐ ์์ ์์น ํ๋์นด๋ C00 ๋ค์ ์ํ100x70cm C22 ์ผ์ค(๋จ์)-์์ ๋ฌธ๊ตฌํฌํ ํ_์ํ 100x70cm (์ฃผ)์๋น์จ์ผ์ธํฐ๋ ํฐ๋ธ
- text: ์คํ๋ง ์ ๋ณธ PDF ํ๋ฐฑ ๊ณ ํ์ง ๋ ์ด์ ์ถ๋ ฅ - ํ๋ฐฑ ์๋ฉด์ธ์ ๋ชจ์กฐ์ง80g 50p ์คํ๋ง ํ๋ฐฑ๋จ๋ฉดโ_๋ชจ์กฐ์ง80g_179~182 page ๋์์ถํ ๋ฒํ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.964332367808258
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 17 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 6.0 | <ul><li>'๋ช
ํจ ๋น์ผ์ ์ ์๋ ์ฌํํ ๊ณ ๊ธ ์ธ์ ์นดํ์ฟ ํฐ ์์ ๋ถ๋์ฐ ํ์ฌ ๋จ๋ฉด ๋น์ฝํ
200๋งค ==์ง๊ฐ์๋ ์ฌ์ง(โผ์๋์์ ์ ํโผ)== ์ฃผ์ํ์ฌ ์คํฌ๋์์ธ๊ทธ๋ฃน'</li><li>'์ ๋จ์ง ์๋ ์ธ์ ์ ์ A5 ๋จ๋ฉด ์ปฌ๋ฌ ์ค๋
ธ์ฐ์ง150 ํํ๋ ๋ฆฌํ๋ ๋น์ผ B5์๋ฉด(16์ )_์ํธ์ง200 ํ๋ฆฐํ
ํฉํ ๋ฆฌ'</li><li>'์ํฌ์ ฏ - ์ปฌ๋ฌ ์ถ๋ ฅ ์ ๋ณธ - A5 ๋ฐฑ์80g ์๋ฉด A3(420ร297ใ) - ์ ๋ณธ๋ถ๊ฐ_๋ฐฑ์ 80g ๋ณต์ฌ์ง_์ปฌ๋ฌ์๋ฉดโณโณ (ํ์ด์งX๊ถ์=์ ์ฒด ์ชฝ์ ์
๋ ฅ) ์ํ๋ ์ค'</li></ul> |
| 2.0 | <ul><li>'[3๊ฐ๋ถํฐ ์ ์] ๋ฐฑ์ํํธ ์๋ธ๋ฌ ๋๊ธ๋ฌ ์ผ์นด๋ ๊ฐ๊ฒฉํ POP ์์ ๋ชจ์ ์ฌ๋จ 100x100mm_Aํ(์ฌ๊ฐ/์ํ/๋ผ์ด๋)_260์ฅ (์ฃผ)์ฌ๋ฆ๊ธฐํ'</li><li>'ํด๋์ฉ ๋ค์ด์ดํธ ํผํ
ํ๋ฆฌ ์ท ์๋ฅ ์ ์ฒด ์ค์ 1.5m ์ด์ก์ค2'</li><li>'ํธ์ํด ๋ฐ์คํฌ๋งคํธ ์ฑ
์๋งคํธ ํจ๋ ๊ฐ์ฃฝ ๋ฎ๊ฐ ๊น๊ฐ ๊นํ S6030 L90X45_๋ธ๋ ํธ์ํ์ค๋'</li></ul> |
| 5.0 | <ul><li>'์นด์์ค ๊ณ์ฐ๊ธฐ MS-20UC ๋ฏธ๋ ๊ท์ฌ์ด ์ปฌ๋ฌ ์ ์๊ณ์ฐ๊ธฐ ๊ณต์์คํ ์ด MS-20UC_ํผํ ์ฃผ์ํ์ฌ ํ๋จํต์'</li><li>'[ํ์ฝ๊ฒ์์ฆ] ํ๋ฆฌ๋ฏธ์ ํฌํ ์นด๋ ์ฌ๋ฆฌ๋ธ ํ๋ 50๋งค 56x87 ํฌ์'</li><li>'ํ ๋ก์ฐ์ฆ ๋ฌธ์ ์ธ๋จ๊ธฐ P-30C 4x34mm ๊ฝ๊ฐ๋ฃจํ ํ์๊ธฐ ์ธ์ ๊ธฐ ํ์งํจ 15L ๋ธ๋ ๋ธ๋ ๋๋ ์ต'</li></ul> |
| 13.0 | <ul><li>'์์ด๋ ์ธ๊ณ์ง๋ ๊ทน์ธ์ฌ ๋ฌ๊ทธ 100ร150cm ์ฌ๊ณ์ - ์์ด๋ฐฉ ๋์ด๋ฐฉ ํค์ฆ ๋งคํธ ์นดํํธ ์์ด๋ณด๋ฆฌ_100ร150cm ์ฃผ์ํ์ฌ ํ๋ธ๋ผ๋ผ์ฌ'</li><li>'์ ๋ฝ์ง๋ ํ๊ธ์๋ฌธ ์ฝํ
- ์๊ตญ ํ๋์ค ์ฌํ ์ธ๊ณ์ ๋ ํฌ์คํฐ ๋ํ(210x150cm) (์ฃผ)๋์ฐ๋งต์ํํธ'</li><li>'ํ๊ตญ 100๋ ๋ช
์ฐ ์ฌ๊ถ ๋ํ๋ฏผ๊ตญ ์ฐ๋ฆผ์ฒญ ๋ฐฑ๋๋ช
์ฐ ์ ์์ ์์ฒฉ ๋ฑ์ฐ์ฌ๊ถ ๊ธฐ๋ก ์ฌ๊ถ+์คํฌํ ์ถ์ต๊ฑฐ๋ฆฌ'</li></ul> |
| 11.0 | <ul><li>'ํฌ๋ฆฌ์ค๋ง์ค ์ ๋ฌผ ์ปคํธ๋ฌ๋ฆฌ ์คํผ ํฌํฌ ์ธํธ 4 ๊ฐ/6 ๊ฐ, ์ํฌ ํธ๋ฆฌ ์ฅ์ ๋์ ํธ ๊ณผ์ผ ์ปคํผ 01 A ์ฑ์ด๋ฌผ์ฐ'</li><li>'์ ์ด์บ๋ค ํํฐ ์ผ์ต์ด๋ถ ์์ผ์ด ๋์ฉ๋ ๋ฒจ๋ฅํฉ ์ฑ๊ธ 100์
01.์ฑ๊ธ ํํฌ ์ฃผ์ํ์ฌ ์กฐ์ค์ธํฐ๋ด์
๋'</li><li>'[ํ
๋ฐ์ดํ
] ์ ๊ธฐ(์์๋๋ค๋ฐ์ก) ์ ๋ก์ฐฌ์ค'</li></ul> |
| 10.0 | <ul><li>'๋ผ๋ฒจ์ง A4๋ผ๋ฒจ์ง ์คํฐ์ปค๋ผ๋ฒจ ํฌ๋ช
๋ฐฉ์ ๋ผ๋ฒจ ๋ ์ด์ ์ฉ 1์นธ(์ ์ง) 20์ฅ ํฐ์๋ฐฉ์(Laser)_8์นธ(2X4) 10์ฅ (์ฃผ)์ ํผ์คํธ'</li><li>'์ข
์ด๋๋ผ ์ฝํ
ํ๋ฆ A3 100mic 100๋งค ๋์ง๊ต์ก ์ฃผ์ํ์ฌ'</li><li>'๋๋ธ์์ด A4 ๋ณต์ฌ์ฉ์ง 75g 2000๋งค ์ ์ด์ค์์ค(J&S)'</li></ul> |
| 4.0 | <ul><li>'๋ง๊ทธ๋คํฑ ์์ ๋ณด๋ 29cm x 56cm ์ฃผ๋ฌธ์ ์ ๊ทธ๋ ์ด_380x760 ์ธ์ด์ ๋ฐ'</li><li>'ํธ์ด์ง๋ณด๋ LCD์ ์๋
ธํธ ๋ฉ๋ชจ๋ณด๋ ๋๋ก์ํจ๋ ์ค๋งํธ ๋ฉ๋ชจ์ฅ 8.5์ธ์น(์ปฌ๋ฌ๊ธ์จ)_ํํฌ ํธ์ด์ง๋ณด๋(Howeasy Board)'</li><li>'๋
ธ๋ฒ ์ ค ํ์ดํธ์ ๋ฆฌ์น ํ 1200x800mm ๋ญ์ ๋ฒฝ๊ฑธ์ด ๊ธ๋ผ์ค๋ณด๋ 1200x800mm ์ผ๋ฐ ์ฃผ์ํ์ฌ ๋ญ์๊ธ๋ผ์ค'</li></ul> |
| 0.0 | <ul><li>'์ค๋ํผ ์ธ๋ฑ์ค๋
ธํธ ๊ทธ๋ ์ด(๋ํธ) ๋ญ๋ญ๋ฐฉ๊ตฌ'</li><li>'3M ํฌ์คํธ์ 653-20A ๋์ฉ๋ํฉ ์ผ๋ฐ ์ ์ฐฉ ์ ์ฐฉ์ฉ ๋
ธํธ 51x38mm ์ ๋น๊ณตํก'</li><li>'์์ฐ๋ถ ์๋ชฉ๋ณดํธ๋ ์๋ชฉ์๋ ๋ธ๋_S ๋ธ๋ฃจ๋ง์ผ'</li></ul> |
| 14.0 | <ul><li>'๋ฐฑ๊ด ๊ฒ์์ ๊ฒ์ ๋ดํฌ ํฐ์๋น๋๋ดํฌ ๊ฒ์ _6ํธ[20๋ฆฌํฐ](36x60+14/200๋งค) ๋ฐฑ๊ด๋น๋์ฐ์
'</li><li>'๋น๋๋ท์ปด ๊ฐ๋๋ดํฌ ์ข
์ด๋ดํฌ ์นดํ ํฌ๋ผํํธ ๋นตํฌ์ฅ 180x110x350 200๋งค ์ข
์ด์ผํ๋ฐฑ/ํ๋ซ(๋ฉ์๋)_๋ฐฑ์ ๊ฐ๊ด์ง_๋/310x125x420/50๋งค ๋น๋๋ท์ปด'</li><li>'[์ฟ ๋ฒ ๋ฅด] ๊ฐ์ฑ ์ธํ
๋ฆฌ์ด ์ฝ์, ํ๋ฆฌ A ๋ฏธ๋ ํฌ์คํฐ ์ฌ์ง ์ดฌ์ ์ํ 5.๋ก๋ง_๋ก๋ง 4์ฅ ์ฟ ๋ฒ ๋ฅด(COUVERT)'</li></ul> |
| 7.0 | <ul><li>'๋ ๋์คํฌํ ์ธ๊ฐ ๋์ฅ ๊ฐ๋ฒฝ์กฐ๋ ์ด์ ์ญ์ฅ์ ๋ ์๊ธฐ ์ด๋ฆ ์ง์ธ ๋จ์ฒด ๋์ฅ ๋ง๋ค๊ธฐ ํ๊ธ๊ณ ์ธ์ฒด ์ ํ 01 - ๊ฐ๋ฒฝ์กฐ๋ ์ด์ _์์_ํ์์ด์์ฒด ๋ ๋์คํฌํ'</li><li>'์๋ ๋ฒํธ ๋๋ฒ๋ง ์คํฌํ ๋งํน ๋ ์ง ๊ฐ๊ฒฉํ ์ฐ๋ ๋์ฅ ์๋ 3 ์๋ฆฌ ์ ์๋_4 LIN SHUZHEN'</li><li>'์์ดํ๋ ์ค ์ฐ๋ง ์คํฌํ ๊ธ๋ฃจ๊ฑด ์ค๋ง์์ค 11mm ๊ธ๋ฃจ์์ค1๋ฒ(๊ธ๋ฃจ) ๊ธ๋ฃจ ์์ค 34๋ฒ(๊ธ๋ฃจ) ์์ดํ๋ ์ค(Aflex)'</li></ul> |
| 16.0 | <ul><li>'๋ธ๋ฃจ์นฉ ์ธํํํต ํด๋ฆฌ์์คํ
๋ฅด ๋์ฉ๋ 14์ธ์ด์์ฌ์ฉ ์ธํํํต(์ค๋ฆฌ๋์์ธ) ์์ด์ค ๋ฌด์ญ'</li><li>'๋ชจ๋๋ฏธ ๋งค์ง ์บก ์ ์ ๋ผ์ธ'</li><li>'๋ชจ๋๋ฏธ ๋ณด๋๋ง์นด ๋ฆฌํด๋ ์์ํฌ 220 ํ ์ฃผ์ํ์ฌ ์์ค์์ด์น๋ชฐ'</li></ul> |
| 15.0 | <ul><li>'ํ๋ฆฌ๋ฏธ์ ์
๋ณด ํ์ผ 10๋งค~60๋งค ๋ฐด๋ํ์ผ ๋ฉ๋ชจ ๊ฐ๋ฅ 40๋งค_๋ธ๋ฃจ(๋ฎค์ง)_์ถ๊ฐ ๋ค๋ค๋ค๊ธฐ์
'</li><li>'A4ํ์ผ ํด๋ฆฝ๋ณด๋ A4ํ์ผ์ฒ ๊ฒฐ์ฌํ ์ธ๋กํ ๊ฐ๋กํ_๊ทธ๋ฆฐ ์ด๋ฆผ ๋ง๋
'</li><li>'[๋ค๋ ์ฝํผ๋ ์ด์
] A4 ํ์ผ ์ฒ ํ์ผ์ผ์ด์ค ๊ทธ๋ ์ด ๋ค๋ ์ฝํผ๋ ์ด์
'</li></ul> |
| 9.0 | <ul><li>'์์๋ชจํฐํ ํฌ์นด๋ฐ์ธ๋ 6๊ณต ํฌํ ์นด๋ ๋ฐ์ธ๋ A5 A5_๋ธ๋_๋ฐ์ธ๋ ์๊ฐ๋ฌธ๊ตฌ'</li><li>'๋ง์ด๋ฌ๋ธ ์ด์ํ์จ๋ฒ ๊พธ๋ฏธ๊ธฐ ๋ฑ๋ ์๊ธฐ ๊ฐ์ธ ์ฌ์ง์ ๋ฆฌ ์
ํํฌํ ๋ถ ์จ๋ฒ ๋จํ ํฌ๋ฆผ ์จ๋ฒ + ๊พธ๋ฏธ๊ธฐ ์ธํธ_๋ง์ด๋ฌ๋ธ์จ๋ฒ: B๊ธํฌ๋ฆผ / ์ฃผ์์คํฐ์ปค: ์ปฌ๋ฌ_ํ ๋ผ๋ + ํ๋กํ ์คํฐ์ปค ์ปคํ๋ฌ์ค'</li><li>'์์๋ชจํฐํ ํฌ์นด๋ฐ์ธ๋ 6๊ณต ํฌํ ์นด๋ ๋ฐ์ธ๋ A5 A6_์ฌ๋ฆฌ๋ธ_๋ฐ์ธ๋ ์๊ฐ๋ฌธ๊ตฌ'</li></ul> |
| 12.0 | <ul><li>'๋กค ํธ๋ ์ฑ์ง ๋กค ํธ๋ ์ด์ฑ์ง idem ํ๋กํฐ ๋ฐํฌ๋ช
์ข
์ด A1 ํ์ด๋น์ฆ์ฝ๋ฆฌ์'</li><li>'์ ํ ์ผ์ ํธ๋ ์ฑ์ง A4 A3 80g 85g ํธ๋ ์ด์ฑ์ง ์ผ์ A3 100๋งค ๋ํฌ์คํ'</li><li>'๋กค ํธ๋ ์ฑ์ง ๋กค ํธ๋ ์ด์ฑ์ง idem ํ๋กํฐ ๋ฐํฌ๋ช
์ข
์ด A3 10. 12์ธ์นx50์ผ๋(ํฐ์) ํ์ด๋น์ฆ์ฝ๋ฆฌ์'</li></ul> |
| 3.0 | <ul><li>'์ข
์ด๋๋ผ ๋น๋น๋ ๋ง๋ธ๋ง๋ฌผ๊ฐ 12์ (1๊ฐ) ๋ฐ์ดํฌ๋น'</li><li>'๋ค์ค๋์ ์์ ์ํ ๊ณ ๋ฆฌ ๋ง๋ ์ฌ๊ฐ ๋ค์ค๋๋ฎด ์ด๊ฐ๋ ฅ์์ 2 x 1mm ์ ํ.4 ๋ค์ค๋์ ์ฌ๊ฐ์์_โ
๋ชจ๋ ์น์์ ๋จ์๋ mm ์
๋๋คโ
์ ๋ง๊ทธ๋คํธ'</li><li>'๋๋ฃจ์ฝ S ์ปคํฐ๋ 10๋งค ์นผ๋ ์ปคํฐ์นผ ์ฌ๋ฌด์ฉ์นผ S์ปคํฐ๋ 10์
์์ด์นํฐ๋น ํธ๋ ์ด๋'</li></ul> |
| 1.0 | <ul><li>'2025 ์์ง ๋ค์ด์ด๋ฆฌ ์ ์ฆ์ด๋ฆฌ 25A ๋ก๊ณ ์ธ์ ์์ฒฉ&ํฌ์ผ๋ค์ด์ด๋ฆฌ_์ธ์คํ์ผ48_๊ทธ๋ ์ด ์ฃผ์ํ์ฌ ์ ์ด์ํ์ต'</li><li>'2025 ์๋ฅด๋์ ๋จผ์ฌ๋ฆฌ ํ๋๋ ํ์ดํธ ๋ฐ์ด๋๋ฆผ ์ง์จ์ด์ปจํ
์ธ (JIWAY Contents)'</li><li>'ํ๋ณด์ฉ ์ํด ๋ฒฝ๊ฑธ์ด ๋ฌ๋ ฅ ํน๋ณ2์ ์ซ์ํ 70๋ชจ์กฐ 1๋ถ ์ ์ ์ซ์ํ ์บ๋ฆฐ๋ ์ฃผ๋ฌธ ์ธ์ ๋ฒฝ๊ฑธ์ด๋ฌ๋ ฅ_2-1 2์ ์ซ์ํ70๋ชจ์กฐ_200๋ถ ์ ์ 1๋ถ ๋จ๊ฐ ์ด๋ฌด๋๋ผ ์ฃผ์ํ์ฌ'</li></ul> |
| 8.0 | <ul><li>'์ ์ผ SINIL ๋ฐ์คํ
์ดํ ๊ฒฝํฌ์ฅ ํฌ๋ช
ํ๋ฐฐ ํฌ์ฅ OPP ๊ณ ์ ์ฐฉ ํ
์ดํ 80m 40๊ฐ SINIL ์คํฌ์ฅ ํฌ๋ช
_50M 50๊ฐ ์ฃผ์ํ์ฌ ์ ์ผ'</li><li>'3M ์ค์นด์น ํฌ๋ช
์๋ฉดํ
์ดํ ๋ฆฌํ 017R 017D ๋ณธํ ์ข์์ฌ๋๋ฌธ๊ตฌ์ ํต'</li><li>'์ฐ๋ฆผ ๋ฐ์ค ํ
์ดํ L3 50M 50๊ฐ ๊ฒฝํฌ์ฅ ํฌ๋ช
OPP ํฌ์ฅ์ฉ ๋ฌ๋ฒํ
์ดํ(๊ฒจ์ธ์ฉ/๋๋์ฉ)_S์๋ฆฌ์ฆ_(๊ฒจ์ธ์ฉ)S1-๋ฌ๋ฒ(40Mx50๊ฐ)ํฉ์ ์ฐ๋ฆผ ๊ฐ์์ง์ฌ'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9643 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh8")
# Run inference
preds = model("ํ๊ณตํ๋ท์ปด ์ธ๊ณ์ง๋ ๋ํ ์๋๋งต ์ธ๊ณ์ง๋03_600x900 (์ฃผ)์ค๋น")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 11.1176 | 26 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
| 10.0 | 50 |
| 11.0 | 50 |
| 12.0 | 50 |
| 13.0 | 50 |
| 14.0 | 50 |
| 15.0 | 50 |
| 16.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
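The hyperparameters above correspond one-to-one to SetFit's `TrainingArguments`. A minimal sketch of how a comparable run could be set up (the `train_dataset` preparation is assumed and not shown):
```python
# Sketch only: reproduces the configuration listed above with the SetFit 1.x API.
from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(
    batch_size=(512, 512),               # (embedding phase, classifier phase)
    num_epochs=(20, 20),
    sampling_strategy="oversampling",
    num_iterations=40,                   # contrastive pairs sampled per example
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```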
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0075 | 1 | 0.4622 | - |
| 0.3759 | 50 | 0.3276 | - |
| 0.7519 | 100 | 0.2741 | - |
| 1.1278 | 150 | 0.167 | - |
| 1.5038 | 200 | 0.082 | - |
| 1.8797 | 250 | 0.0368 | - |
| 2.2556 | 300 | 0.0406 | - |
| 2.6316 | 350 | 0.0331 | - |
| 3.0075 | 400 | 0.0282 | - |
| 3.3835 | 450 | 0.0144 | - |
| 3.7594 | 500 | 0.005 | - |
| 4.1353 | 550 | 0.0036 | - |
| 4.5113 | 600 | 0.0036 | - |
| 4.8872 | 650 | 0.0005 | - |
| 5.2632 | 700 | 0.0003 | - |
| 5.6391 | 750 | 0.0003 | - |
| 6.0150 | 800 | 0.0002 | - |
| 6.3910 | 850 | 0.0003 | - |
| 6.7669 | 900 | 0.0002 | - |
| 7.1429 | 950 | 0.0002 | - |
| 7.5188 | 1000 | 0.0001 | - |
| 7.8947 | 1050 | 0.0001 | - |
| 8.2707 | 1100 | 0.0001 | - |
| 8.6466 | 1150 | 0.0001 | - |
| 9.0226 | 1200 | 0.0001 | - |
| 9.3985 | 1250 | 0.0001 | - |
| 9.7744 | 1300 | 0.0001 | - |
| 10.1504 | 1350 | 0.0001 | - |
| 10.5263 | 1400 | 0.0001 | - |
| 10.9023 | 1450 | 0.0001 | - |
| 11.2782 | 1500 | 0.0001 | - |
| 11.6541 | 1550 | 0.0001 | - |
| 12.0301 | 1600 | 0.0001 | - |
| 12.4060 | 1650 | 0.0001 | - |
| 12.7820 | 1700 | 0.0001 | - |
| 13.1579 | 1750 | 0.0001 | - |
| 13.5338 | 1800 | 0.0001 | - |
| 13.9098 | 1850 | 0.0001 | - |
| 14.2857 | 1900 | 0.0001 | - |
| 14.6617 | 1950 | 0.0001 | - |
| 15.0376 | 2000 | 0.0001 | - |
| 15.4135 | 2050 | 0.0001 | - |
| 15.7895 | 2100 | 0.0001 | - |
| 16.1654 | 2150 | 0.0001 | - |
| 16.5414 | 2200 | 0.0001 | - |
| 16.9173 | 2250 | 0.0001 | - |
| 17.2932 | 2300 | 0.0001 | - |
| 17.6692 | 2350 | 0.0001 | - |
| 18.0451 | 2400 | 0.0001 | - |
| 18.4211 | 2450 | 0.0001 | - |
| 18.7970 | 2500 | 0.0001 | - |
| 19.1729 | 2550 | 0.0001 | - |
| 19.5489 | 2600 | 0.0001 | - |
| 19.9248 | 2650 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
olgagasowska/mbart-large-50-many-to-many-mmt-iva_mt-en-pl
|
olgagasowska
| 2024-11-20T15:18:44Z | 115 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:iva_mt_wslot",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-20T15:14:16Z |
---
library_name: transformers
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
datasets:
- iva_mt_wslot
metrics:
- bleu
model-index:
- name: iva_mt
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: iva_mt_wslot
type: iva_mt_wslot
config: en-pl
split: validation
args: en-pl
metrics:
- name: Bleu
type: bleu
value: 8.2225
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iva_mt
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the iva_mt_wslot dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4663
- Bleu: 8.2225
- Gen Len: 14.5833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
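These hyperparameters map onto the standard ๐ค `Seq2SeqTrainingArguments`; a minimal sketch of the implied setup (the output directory name is illustrative, and `tokenized_datasets` is assumed to hold the tokenized en-pl splits):
```python
# Sketch of the training configuration described above.
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

training_args = Seq2SeqTrainingArguments(
    output_dir="iva_mt",               # illustrative
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,    # total train batch size: 8 x 16 = 128
    lr_scheduler_type="linear",
    num_train_epochs=3,
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
    predict_with_generate=True,        # needed to compute BLEU / Gen Len
)
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```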
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|
| No log | 0.6154 | 1 | 2.4663 | 8.2225 | 14.5833 |
| No log | 1.8462 | 3 | 2.4663 | 8.2225 | 14.5833 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mav23/pythia-1.4b-GGUF
|
mav23
| 2024-11-20T15:15:38Z | 386 | 0 | null |
[
"gguf",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T15:03:12Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1.4B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token predicted by the model need not produce the
most “accurate” text. Never rely on Pythia-1.4B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1.4B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
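(As a consistency check, the totals above agree: 143,000 steps × 2,097,152 tokens per step = 299,892,736,000 tokens, with a checkpoint every 1,000 steps × 2,097,152 tokens = 2,097,152,000 tokens.)<br>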
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
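For reference, a hedged sketch of rerunning one of these evaluations; the CLI shape and task names follow recent lm-evaluation-harness releases and may differ from the version used for the published results:
```bash
pip install lm-eval
# Evaluate the final (main / step143000) checkpoint on the plotted tasks.
lm_eval --model hf \
  --model_args pretrained=EleutherAI/pythia-1.4b \
  --tasks lambada_openai,piqa,winogrande,arc_easy,sciq \
  --batch_size 8
```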
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA โ OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
outlookAi/MpcozlSWhy
|
outlookAi
| 2024-11-20T15:11:55Z | 329 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-20T14:38:18Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cinematicmovietone
---
# Mpcozlswhy
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cinematicmovietone` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/MpcozlSWhy', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
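Note that, per the trigger-word section above, `cinematicmovietone` should appear in the prompt itself, e.g. `image = pipeline('cinematicmovietone, a quiet street at dusk').images[0]` (the scene description here is illustrative).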
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mini1013/master_cate_lh7
|
mini1013
| 2024-11-20T15:09:40Z | 100 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T15:09:17Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: SD ๋ฐ์ด์ค ์์ค๋ ์ฝ๋ํ๋ฆฌ ๋น๋จ๊ฒ์ฌ์ง 4๋ฐ์ค 200๋งค (์ ํจ๊ธฐ๊ฐ 2025๋
03์) ์ฝ๋ํ๋ฆฌ 200๋งค+์์ฝ์ 100๋งค ์ ์์ค๋ฉ๋์นผ
- text: ์ํ์ฒต ์ํํธํด๋ฆญ์ค ์ฑํ๊ธฐ+์ฑํ์นจ 25๊ฐ ์กํฐ๋ธ ํผํฌ๋ง ์ธ์คํดํธ ๊ฐ์ด๋ ๋์
์ฑํ๋ฐ๋ ์ฃผ์ํ์ฌ ๋์์ค์ง์ 
- text: ๋
น์ญ์ ํ๋น์ํ์ง ๋น๋จ ์ํ์ง ๊ทธ๋ฆฐ๋ฅํฐ 50๋งค ์ํ์ง100๋งค+์ฒดํ์นจ100๊ฐ ์์ฌ์คํ ์ด
- text: HL ์ง๋ฅํฐ ํ๋น์ํ์ง 100๋งค /๋น๋จ์ธก์ ๊ฒ์ฌ์ง ์คํธ๋ฆฝ 1_์ง๋ฅํฐ ํ๋น์ํ์ง 100๋งค+์์ฝ์100๋งค ํฌ์ค๋ผeํ
- text: ๋น๋ ์ธํธ๋ผํ์ธ ์ธ์๋ฆฐ ์ฃผ์ฌ๊ธฐ 1๋ฐ์ค 100๊ฐ 328821[31G 8mm 0.5ml]BD ํ๋๋ค ์ฃผ์ฌ๋ฐ๋ ์ธํธ๋ผํ์ธ2 BD ์ธ์๋ฆฐ 31G 6mm 0.5ml 1๋ฐ์ค(324901) ๋๋ฉ๋์นผ์ต
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9786747905559787
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'ํ๋ฆฌ์คํ์ผ ๋ฆฌ๋ธ๋ ๋ฌด์ฑํ ์ฐ์ํ๋น์ธก์ ๊ธฐ(24๋
1์)์ผ๋ผ์ด๋ธํจ์น1๋งค ๊ฑฐ๋๋ช
์ธ์ ๊ด๋ช
ํฌ์ค์ผ์ด'</li><li>'SD ์ฝ๋ํ๋ฆฌ ํ๋น์ธก์ ๊ธฐ(์ธก์ ๊ธฐ+์ฑํ๊ธฐ+์นจ10๋งค+ํ์ฐ์น)P ์คํ ์ด์ํ'</li><li>'์ฌ๋ฉ๋์ฟ ์ค ๊ธ๋ฃจ์ฝ๋ฅํฐ ํ ํ๋น๊ณ AGM-4100+ํ์ฐ์น+์ฑํ๊ธฐ+์ฑํ์นจ 10๊ฐ ์ ์์ค๋ฉ๋์นผ'</li></ul> |
| 2.0 | <ul><li>'์์ค๋ SD ์ฝ๋ํ๋ฆฌ ์ธก์ ์ง|๊ฒ์ฌ์ง|์ํ์ง 100๋งค(25๋
2์) ๋๋ฉ๋์นผ์ต'</li><li>'๋ฐ๋ก์ฐ ๋น๋จ๊ฒ์ฌ ํ๋น์ํ์ง 100๋งค(50๋งคx2ํฉ) ์ฌ์ฉ๊ธฐํ 25๋
3์ MinSellAmount ์ ๋ํ๋ผ์ด์ค'</li><li>'์ตํฐ์ ํ๋ฆฌ์คํ์ผ ์ผํค์ํ์ง1๋ฐ์ค10๋งค ๊ฒ์ฌ์ง ํ์ค (24๋
8์) ๋ฉ๋ํธ๋ฆฌ'</li></ul> |
| 0.0 | <ul><li>'๋น๋ ์ธํธ๋ผํ์ธ ์ธ์๋ฆฐ ์ฃผ์ฌ๊ธฐ 1๋ฐ์ค 100์
324901 [31G 6mm 0.5ml] BD ํ๋๋ค ์ฃผ์ฌ๋ฐ๋ ์ธํธ๋ผํ์ธ2 BD ์ธ์๋ฆฐ 31G 8mm 3/10ml(0.5๋จ์) 1๋ฐ์ค(320440) ๋๋ฉ๋์นผ์ต'</li><li>'BD ๋น๋ ์ธํธ๋ผํ์ธ ์ธ์๋ฆฐ ์ฃผ์ฌ๊ธฐ ์๋ฆฐ์ง 31G 6mm 1ml 324903 100์
์ฃผ์ํ์ฌ ๋์์ค์ง์ '</li><li>'์ ๋ฆผ ๋ฉธ๊ท ์ผํ์ฉ ์ฃผ์ฌ๊ธฐ 3cc 23g 25mm 100๊ฐ์
๋ฉธ๊ท ์ฃผ์ฌ๊ธฐ 10cc 18G 38mm(100ea/pck) (์ฃผ)์ผ์ด๋์์ฌ'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9787 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh7")
# Run inference
preds = model("๋
น์ญ์ ํ๋น์ํ์ง ๋น๋จ ์ํ์ง ๊ทธ๋ฆฐ๋ฅํฐ 50๋งค ์ํ์ง100๋งค+์ฒดํ์นจ100๊ฐ ์์ฌ์คํ ์ด")
```
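Beyond hard labels, class probabilities can be useful for confidence thresholding; a small sketch using the same model (`predict_proba` is part of the SetFit API, and the input string is shortened from the examples above):
```python
# Returns one probability per class (3 classes for this model).
probs = model.predict_proba(["SD ๋ฐ์ด์ค ์์ค๋ ์ฝ๋ํ๋ฆฌ ๋น๋จ๊ฒ์ฌ์ง 4๋ฐ์ค 200๋งค"])
print(probs)  # shape: (1, 3)
```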
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 9.62 | 21 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0417 | 1 | 0.4565 | - |
| 2.0833 | 50 | 0.1836 | - |
| 4.1667 | 100 | 0.1645 | - |
| 6.25 | 150 | 0.0004 | - |
| 8.3333 | 200 | 0.0001 | - |
| 10.4167 | 250 | 0.0001 | - |
| 12.5 | 300 | 0.0 | - |
| 14.5833 | 350 | 0.0 | - |
| 16.6667 | 400 | 0.0 | - |
| 18.75 | 450 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mini1013/master_cate_lh6
|
mini1013
| 2024-11-20T15:06:41Z | 212 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T15:06:18Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์ ์ธ์ฅ ์ํํธ๋ ์ฆ ๋ ์ฆ์ธ์ฒ๊ธฐ ์๋ ์
ํ ์ธ์ฒ ํ์์ ํ_ํํฌ ์์ด์๋ฒค์์
- text: '[๋ฉ๋ฆฌ๋ฌ]๋ฉ๊ตฌ๋ฆฌ์ฆ/๋ฉ๋ฆฌ๋ฌ ์์ด๋ง์คํฌ ์๋ฉด์๋ 12์
5.์ ์ต์ ์ ์ํฅ 12P ๋กฏ๋ฐ์์ด๋ชฐ'
- text: ๊ต์ฒด์ฉ ์ผ์ด์ค ์ํํธ ์ง๊ฒ ๊ฑฐ์ธ ์ฝํ
ํธ ์ธํธ ๋ธ๋ฃจ ์๊ฐ๋๋
- text: ๋ณด์๋ฅด ์์ด์์ ์ด์ํ ์๊ฒฝ ๋ ์ฆ์ธ์ฒ๊ธฐ ๋์๋ณด์ด์ง ์๋ ๊ฐ์ข
์ธ๊ท 99.7% ์๋ฒฝ์ธ์ฒ ํ์ดํธ U0001 ์ค์ ์ฃผ์ํ์ฌ
- text: ์๋ ์ผ์น์ด ์ฐ์ง 2์ข
๋์ฐ์ง ์ฌํ ์๋ฉด ์บ๋ฆญํฐ ๋ธ๋ ์ ํฌ์
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9615384615384616
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 3.0 | <ul><li>'๊ตฟ๋์ ์จ์ด์๋ ์๋ฉด์๋ ๋์ฐ์ง ๋์ฐ์ง๊ธฐ ๋์ฐ์งํฉ MinSellAmount ์ค์์๋'</li><li>'[๋๊ตฌ๋ฐฑํ์ ] [๋๋ฆฌ์์ด]์๊ตฌ๊ฑด์กฐ์ฆ ์น๋ฃ์๋ฃ๊ธฐ๊ธฐ ๋๋ฆฌ์์ด 5800 (์์์ฉ์ํธ์ง 1๋ฐ์ค ) ๋๋ฆฌ์์ด 5800 ๋๊ตฌ๋ฐฑํ์ '</li><li>'๋๊ตญ์ ์ฝ ๊ตฟ์ ์คํ์๋ 3๋ฐ์ค ์๋ฉด ์จ์ด์๋ (๋ฌดํฅ/์นด๋ชจ๋ง์ผํฅ ์ ํ) 1_๋ฌดํฅ 3๋ฐ์ค_AA ๋๊ตญ์ ์ฝ_๋ณธ์ฌ์ง์'</li></ul> |
| 0.0 | <ul><li>'๋ ์ฆ์ง๊ฒ ๋ ์ฆ ๋ฃ๋ ์ง๊ฒ ๋ผ๋ ๋๊ตฌ ํก์ฐฉ๋ด ์ํํธ ๋ ์ฆ์ง๊ฒ(ํํฌ) ์ฌ๋๋'</li><li>'๋ฉ๋ฃจ๋ฃจ ์๋ฐ์ด ์ํํธ๋ ์ฆ ์ง๊ฒ ์ฐฉ์ฉ ๋ถ๋ฆฌ ๊ธฐ๊ตฌ 1์ธํธ MinSellAmount ์ฒด๋ฆฌํ์ค'</li><li>'์ํํธ ํต ์ผ์ด์ค ๋นผ๋๋๊ตฌ ์ ์ ์ฉํ ํก์ฐฉ๋ด ๋ฝ๋ฝ์ด ๋ณด๊ดํต ํ๋ ๋ ์ฆํต(๋ธ๋ฃจ) ๊ธฐ์๋คํฌ์ต'</li></ul> |
| 2.0 | <ul><li>'์ด์ํ ๋ณํ์ฅ์น ์ง๋๊ธฐ ์๊ธฐ ์ธ์ฒ๊ธฐ ์ง๋ํ ์์ฑ๊ธฐ ์ด์ํ๋ฐ์๊ธฐ ๋ณํ๊ธฐ D. 20-40K1800W (๋น๊ณ ์ฃผํ์) ๋ฉํ๋ชฐ'</li><li>'์ํ ์ด์ํ์ธ์ ๊ธฐ SH-1050 / 28kHz / 1.2L / ์ ์ ํ ์ฃผ์ํ์ฌ ์ ์์ฝ๋ฆฌ์'</li><li>'์ํ ๋์งํธ ์ด์ํ ์ธ์ฒ๊ธฐ ์ธ์ ๊ธฐ SH-1050D ์๊ฒฝ ๋ ์ฆ ๊ท๊ธ์ ์ธ์ฒ๊ธฐ ์์งํ์ดํ
'</li></ul> |
| 1.0 | <ul><li>'ํด๋จผ๋ฐ์ด์ค ์์ผ์ ์ค์ธ์ ์ฝ ์
๋ผ์ธ ์์ผ์ 370ml 20๊ฐ, ๋๋ฆผ ํ๋ ๋ ์ฆ์ฉ ์๋ฆฌ ์์ผ์ ๊ฐ์ด์์ฝ๋ฆฌ์ ํด๋จผ๋ฐ์ด์ค ์์ผ์ 500ml 20๊ฐ ๊ฐ์ด์์ฝ๋ฆฌ์(Gaia Korea)'</li><li>'๋ฆฌ๋ด ์ผ์ํฐ๋ธ 355ml ์จ์ฑ๋์๊ฒฝ์ฒด์ธํ๋ฐฑ์ '</li><li>'๋ฐ์๋กฌ ๋ฐ์ด์คํธ๋ฃจ 300ml ์ ์์ '</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9615 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh6")
# Run inference
preds = model("๊ต์ฒด์ฉ ์ผ์ด์ค ์ํํธ ์ง๊ฒ ๊ฑฐ์ธ ์ฝํ
ํธ ์ธํธ ๋ธ๋ฃจ ์๊ฐ๋๋")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.705 | 19 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0312 | 1 | 0.4002 | - |
| 1.5625 | 50 | 0.064 | - |
| 3.125 | 100 | 0.0021 | - |
| 4.6875 | 150 | 0.0004 | - |
| 6.25 | 200 | 0.0001 | - |
| 7.8125 | 250 | 0.0001 | - |
| 9.375 | 300 | 0.0 | - |
| 10.9375 | 350 | 0.0 | - |
| 12.5 | 400 | 0.0 | - |
| 14.0625 | 450 | 0.0 | - |
| 15.625 | 500 | 0.0 | - |
| 17.1875 | 550 | 0.0 | - |
| 18.75 | 600 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mini1013/master_cate_lh5
|
mini1013
| 2024-11-20T15:03:34Z | 102 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T15:03:10Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ํ์ ์ถฉ์ ์ ์จ์ ์ฐ์ง๊ธฐ ์จ์ด BEST ๋ฒจํธํ ์ ๊ธฐ ์ฐ์งํฉ ๋ฐฐ ํ๋ฆฌ ๋ชฉ ์ด๊นจ ๋ณต๋ ํซํฉ ๋ฒจํธํ_๋ณด๋ผ์ ๊ตฌ๋ฆ๋ชจ์ ์ฃผ์ํ์ฌ ์์ผ๋ฉ๋
- text: '์ถฉ์ ์ ์จ์ ์ฐ์ง๊ธฐ ์จ์ด ์ ๊ธฐ ์ฐ์งํฉ IVB-D1000 ํํฌ '
- text: ๋ฉ์ด์ค ๋ณด์จ ๋ฌผ์ฃผ๋จธ๋ ์ฐ์งํฉ ์จ์ด ํ๋ฆฌ ๋ฐฐ ๋ณต๋ถ ์จ์ ์จ์ฐ์ง ํซํฉ ๋ณด์จ์ฃผ๋จธ๋ 2L ๋ณด์จ๋ฌผ์ฃผ๋จธ๋_1L ๋ธ๋ผ์ด ๋ฉ์ด์ค์ฝ๋ฆฌ์
- text: ์์์์ ค ์ฐ๋ ํฅ ์ฐ์งํฉ ํซํฉ ์ฃผ๋จธ๋ ๋ถ๋ชจ๋ ์ ๋ฌผ 1_์ ํ7 ๊ฝํฅ์ฐ์งํฉ ์์์์ ค123
- text: ์จ๊ฐํ
๋ผํผ ์จ์ด ๋ชฉ ๋ง์คํฌ 5๋งค x 5๊ฐ / ์ปจ๋์
์ผ์ด 1.์จ๊ฐํ
๋ผํผ ์จ์ด ๋ชฉ ๋ง์คํฌ 5๋งค์
x 5๊ฐ ๋ผ์ด์จ์ฝ๋ฆฌ์ ์ฃผ์ํ์ฌ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9710382513661202
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'ํ์ฌ ๋
์ผ ๋ณด์จ ๋ฌผ์ฃผ๋จธ๋ ๋
ธ์ปค๋ฒ ๊ธฐ๋ณธํ ์ปค๋ฒ ์ฒดํฌ ํํฌ๋ค์ด๋น ์ฃผ์ํ์ฌ ํ์ด์ ๋ก'</li><li>'ํ์ฌ ๋
์ผ ๋ณด์จ ๋ฌผ์ฃผ๋จธ๋ ๋
ธ์ปค๋ฒ ๊ธฐ๋ณธํ 3.๋
ธ์ปค๋ฒ ๊ธฐ๋ณธํ ๋ ๋ ์ฃผ์ํ์ฌ ํ์ด์ ๋ก'</li><li>'๊ผผ๋ ์ ๊ตญ์ฐ ํ๋ฆฌ๋ฏธ์ ์จ์ด ํฉํ ์๋ฉด ๋ ์จ ์ด๊นจ ์ฐ์ง๊ธฐ ์ฐ์งํฉ ๋ชฉ ๋ฑ ์ฐ์ง ์ฟจ๋งคํธ ํ๋ฆฌ์ฐ์ง๊ธฐ(๊ทธ๋ ์ด) BH์คํ ์ด'</li></ul> |
| 0.0 | <ul><li>'ํ์ ์จ์์ฐ์ง๊ธฐ ๋ฐํฌ ํ์ฐ์น ํ์_SET ๋ฐํฌํ์ฐ์น+๋ณต๋ ํ์์๋ฃ๊ธฐ'</li><li>'ํ์ ์จ์์ฐ์ง๊ธฐ ๋ฐํฌ ํ์ฐ์น ๋ธ๋_๋ฐํฌ ๋ฐ์ฐ์ง๊ธฐ ํ์์๋ฃ๊ธฐ'</li><li>'๊ฒ๋ฅด๋ง๋ ์ ๊ธฐ์ฐ์ง๊ธฐ ๋ธ์ง๊ธฐ ์ฐ์ง๊ธฐ ์ฐ์งํฉ ์ ๊ธฐ๋ฉํธ ํ๋ฆฌ ๋ฐฐ ๋ฌด๋ฆ ์ฐ์ง MinSellAmount ์ค๋ง์ผ๋ฐฐ์ก'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9710 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh5")
# Run inference
preds = model("์ถฉ์ ์ ์จ์ ์ฐ์ง๊ธฐ ์จ์ด ์ ๊ธฐ ์ฐ์งํฉ IVB-D1000 ํํฌ ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 10.73 | 20 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0625 | 1 | 0.3748 | - |
| 3.125 | 50 | 0.0002 | - |
| 6.25 | 100 | 0.0 | - |
| 9.375 | 150 | 0.0 | - |
| 12.5 | 200 | 0.0 | - |
| 15.625 | 250 | 0.0 | - |
| 18.75 | 300 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mini1013/master_cate_lh4
|
mini1013
| 2024-11-20T15:01:58Z | 220 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T15:01:35Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ๋์์ ์ฝ ๊ฐ๊ทธ๋ฆฐ ์ค๋ฆฌ์ง๋ ๊ฐ๊ธ 750ml (1๊ฐ) ๊ฐ๊ทธ๋ฆฐ ์ค๋ฆฌ์ง๋ 820ml L์คํ ์ด
- text: ์ค์ผ๋ง ์
๋์ ์ค์ผ์ผ๋ฌ ์น์์ ๊ฑฐ๊ธฐ ๊ตฌ๊ฐ์ฒญ๊ฒฐ๊ธฐ ์น์ ๋ณ์ด ๋น๋๋ ํ๋ ๋ณด๋ผ์ ์ฌ์น(sachi)
- text: ํ
๋ธ๋ฌ์ฌ 4๊ฐ์ธํธ ํํด๋ฆฌ๋ ์
๋์์ ๊ฑฐ ํ๋ฐฑํ์ ๊ฑฐ ํ์นซ์ i MinSellAmount ํํค๋ณด์ด
- text: '[๊ฐค๋ฌ๋ฆฌ์] ํด๋ฆฌ๋ดํธ ์์น ๋ถ์ฐฉ์ฌ ๋ฏผํธํฅ 70g x5๊ฐ ํํ๊ฐค๋ฌ๋ฆฌ์(์ฃผ)'
- text: ์ ํฐ๋ฏธ ์น์ฝ ํ๋กํด๋ฆฌ์ค 200g ์
๋์ ์ ๊ฑฐ ๋ฏธ๋ฐฑ ์ฝ๋ง ํ๋ ์ค์ธ๋ธ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9477272727272728
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 9.0 | <ul><li>'๋กค๋ฆฌํ ์๋์จ ํญ๊ท ํํด๋ฆฌ๋ 4์ข
ํผํ ํ๋์๋'</li><li>'ํ
๋ธ๋ฌ์ฌ ํํด๋ฆฌ๋ ์
๋์์ ๊ฑฐ ๋ฐฑํ์ ๊ฑฐ ํ์นซ์ MinSellAmount ํํค๋ณด์ด'</li><li>'[์ํ๋๊ฐ] ํํด๋ฆฌ๋ ์ธํธ ๊ทธ๋ฆฐ2๊ฐ+๋ค์ด๋น2๊ฐ ์ฃผ์ํ์ฌ ์ํ๋๊ฐ'</li></ul> |
| 2.0 | <ul><li>'์
ํ ๊ฐ์ ์ฉ ์คํ
์ธ๋ ์ค ์ค์ผ์ผ๋ง ์น์์ ๊ฑฐ๊ธฐ ์ฒญ์ ๋๊ตฌ 304 ํํฌ 6์ข
์ธํธ ์ฃผ์ํ์ฌ ํด๋ผ์ฐ๋'</li><li>'๋๊ตฌ ์น๊ฒฝ ์ ๊ฑฐ ํธ๋์ ์ ๊ฑฐ๊ธฐ ์
๋ฅ ํธ๋๊ฒฐ์ ๋ชฉ๋ฅ ์
ํ ๋ฐ๊ด ๊ท๊ฑธ์ดx์๋ฉํจ ๋ก์์ฐํฐ์๊ณ '</li><li>'์ํ ๊ตฌ์ทจ ์ธก์ ๊ธฐ ํ
์คํธ๊ธฐ ํด๋์ฉ ๋์ ์
์ทจ ํธํก ๊ตฌ๊ฐ ์
๋์์ธก์ ๊ธฐ ์๊ฐ์ง๋จ ์๊ฐ ๊ฐ์ค ํ์ค๋ชจ๋ธ _ ๊ฒ์ ํ๋ณต์ด์ง'</li></ul> |
| 0.0 | <ul><li>'์กด์จ์ค์กด์จ ๊ตฌ๊ฐ์ฒญ๊ฒฐ ๋ฆฌ์คํ
๋ฆฐ ์ฟจ๋ฏผํธ 250ml ํ๋ ์ฌ๋ฒ์คํธ 250ml - 1๊ฐ ๋์ํฌ์ฝ๋ฆฌ์'</li><li>'์ผํ์ฉ ์ฌํ์ฉ ๊ฐ๊ทธ๋ฆฐ ๋ผ์10g 1๊ฐ ํด๋์ฉ ์ค๋ฆฌ์ง๋ ๊ฐ๊ธ์คํฑ ์ค๋ฆฌ์ง๋ 1๊ฐ ์๊ทธ๋ฆฐ์คํ์ด์ค'</li><li>'๊ฐ๊ทธ๋ฆฐ ์ ๋ก 1200ML ์ฑ1day๋ฐฐ์ก'</li></ul> |
| 4.0 | <ul><li>'ํฌ์ค๋
ธํธ ํ์ดํธ๋๊ฒ ํ๋ฃจ 2๋ฒ 30๋ถ ํฌ์๋ก ๋๋ฐ์ด๋ฅผ ํ์๊ฒ ํฌ์ค๋
ธํธ ํ์ดํธ๋๊ฒ 2์ฃผ๋ถ ์ฃผ์ํ์ฌ ๋ค์ด์ฒํด'</li><li>'๋ฃจ์นํ ๋ก ๋ฏธ์คํฑํฌ๋ ์คํธ ์น์ฝ 180g 5๊ฐ ์๋ผ์ด๋ธํฉํ ๋ฆฌ'</li><li>'๋ํ ์น์๋ชจํ ์น์ ๋ชจ๋ธ ๊ตฌ์กฐ ์ธ์ฒด ๊ตฌ๊ฐ ์น๊ณผ C. ๊ตฌ๊ฐ 2๋ฐฐ ํ๋(ํ์ ์ ๊ฑฐ ๊ฐ๋ฅ) ๋ง์ผ ์คํ์ด์คํ ๋ผ'</li></ul> |
| 8.0 | <ul><li>'๋ฏธ์๋ดํ ๊ต์ ์ฅ์น๋ณด๊ดํจ ๊ต์ ๊ธฐ์ผ์ด์ค ๊ต์ ๊ธฐํต ๊ต์ ๊ธฐ๋ณด๊ดํจ-์๋ก์ฐ (์ฃผ)ํกํก๊ทธ๋ฆฐ'</li><li>'์ฑ์ฌ ๋ดํธํฌ๋ฆฐ ํ๋์ธ์ ์ 36๊ฐ์
2๊ฐ ๊ต์ ๊ธฐ ์ธ์ฒ ํฌ๋ง๋ฉ๋'</li><li>'ํด๋ฆฌ๋ดํธ ๋งฅ์ค ์ฐ ์์น ๋ถ์ฐฉ์ฌ(์์น ์ ์ฐฉ์ ) 70gx5๊ฐ+์ํ 1๊ฐ ๋๋ง์ผ'</li></ul> |
| 6.0 | <ul><li>'๋ฐฑ์ ์ ์ํ์นซ์ ๋ฒ ์ด์ง ์คํค 10P ์ํ'</li><li>'์ผํธ์นซ์ ํด๋์ 6๊ฐ์
๋ถ๋๋ฌ์ด ์นซ์ ๋ฏธ์ธ๋ชจ ์น์๊ด๋ฆฌ ์ด๊ธ๋ ์น๊ณผ์นซ์ ์ผํธ ํด๋์ 6๊ฐ_์ผํธ ํ ์ด๊ทน์ธ๋ชจ 1๊ฐ(๋๋ค)_์น๊ฐ์นซ์ 8๊ฐ์
1์ธํธ(๋ ๋ 0.7mm) (์ฃผ)์ง๋ก์ธํฐ๋ด์
๋'</li><li>'์ฟค๋ฌ ๋ฅ ํด๋ฆฐ ํ๋ ฅ ํญ๊ท ์ด์ค๋ฏธ์ธ๋ชจ ์นซ์ ๋ถ๋๋ฌ์ด๋ชจ, 16์
, 1๊ฐ ๊ตฌ๋ถ : ๋ถ๋๋ฌ์ด๋ชจ ์ํ
๋ฐฐ์ก'</li></ul> |
| 3.0 | <ul><li>'์ค๋๋น P&G ์์ค์น์ค ๋ฏผํธํฅ 50m 01.์์ค ์น์ค ๋ฏผํธํฅ 50m TH์์ฌ'</li><li>'์ค๋๋น C์ํ ์ผํ์ฉ ์น์ค 30๊ฐ์
1ํฉ NEW)์น์คC์ 30๊ฐ์
[O121] ํ๊ตญํผ์ค์งํ๋งค์ ํํ์ฌ'</li><li>'์ค๋๋น ์์ค์น์ค (50m 1๊ฐ) ๋ฏผํธ ๋์์ง์ ํต'</li></ul> |
| 5.0 | <ul><li>'LG์ํ๊ฑด๊ฐ ์ฃฝ์ผ ๋ช
์ฝ์ ๊ณจ๋ ํ๋กํด๋ฆฌ์ค ์น์ฝ ํ๋ฌ์ค 120g MinSellAmount ์ค๋๋์ฐ๊ตฌ์'</li><li>'์์ง์ํ๊ฑด๊ฐ ์ฃฝ์ผ ์๋ชธ๊ณ ์น์ฝ 120g 1๊ฐ ์ ๋์ค'</li><li>'์ผ์๋ค์ธ ์ค๋ฆฌ์ง๋ ํ๋ฌ์ค ์น์ฝ 100g 1๊ฐ dm ๋ค์์ปค๋จธ์ค'</li></ul> |
| 7.0 | <ul><li>'[์ ํ์ํ]๋ฅํฐ๋ฒ๋ค ์น์ฝ+์นซ์ ์ฌํ์ฉ์ธํธ 6๊ฐ ์ ์ธ๊ณ๋ชฐ'</li><li>'[์ ํ์ํ]๋ฅํฐ๋ฒ๋ค ํด๋์ฉ ์นซ์์น์ฝ์ธํธ 1๊ฐ ์ ์ธ๊ณ๋ชฐ'</li><li>'ํฌํค ํด๋์ฉ ์นซ์ ์น์ฝ ์ผ์ด์ค ์บก์ํ ๋ต๋กํ ํฌํค์ฉ ์นซ์ํต ๋ณด๊ดํจ ํ๋ ์นซ์์บก ์บก์์นซ ํ์ดํธ๋ธ๋ฃจ ์ญ๋ฆฌ๋น'</li></ul> |
| 1.0 | <ul><li>'์ผ์ ํ์๊ธฐ์ต ๋ง์ฐ์คํผ์ค ์๋ฆฌ๋์พ'</li><li>'ํ์ฉ ์ฝ๊ณจ์ด ๋ฐฉ์ง ์ฉํ ๋์ฑ
์ฉ ๋ง์ฐ์คํผ์ค 8 ๊ฐ ์ธํธ ์ด์์ด๋ฆฌํ
์ผ(EY๋ฆฌํ
์ผ)'</li><li>'์ด๊ฐ์ด๋ฐฉ์ง ์น์ ์๋ ๋ณดํธ ์ ์ง ์
ํ ๋ง์ฐ์คํผ์ค ๊ต์ 2๋จ๊ณ ์ฝ์ค๋ชจ์ค'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9477 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh4")
# Run inference
preds = model("์ ํฐ๋ฏธ ์น์ฝ ํ๋กํด๋ฆฌ์ค 200g ์
๋์ ์ ๊ฑฐ ๋ฏธ๋ฐฑ ์ฝ๋ง ํ๋ ์ค์ธ๋ธ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 10.026 | 23 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0127 | 1 | 0.4686 | - |
| 0.6329 | 50 | 0.2751 | - |
| 1.2658 | 100 | 0.1179 | - |
| 1.8987 | 150 | 0.0739 | - |
| 2.5316 | 200 | 0.0687 | - |
| 3.1646 | 250 | 0.0466 | - |
| 3.7975 | 300 | 0.0591 | - |
| 4.4304 | 350 | 0.0232 | - |
| 5.0633 | 400 | 0.0125 | - |
| 5.6962 | 450 | 0.0134 | - |
| 6.3291 | 500 | 0.0152 | - |
| 6.9620 | 550 | 0.0175 | - |
| 7.5949 | 600 | 0.0118 | - |
| 8.2278 | 650 | 0.007 | - |
| 8.8608 | 700 | 0.0003 | - |
| 9.4937 | 750 | 0.0002 | - |
| 10.1266 | 800 | 0.0001 | - |
| 10.7595 | 850 | 0.0001 | - |
| 11.3924 | 900 | 0.0001 | - |
| 12.0253 | 950 | 0.0001 | - |
| 12.6582 | 1000 | 0.0001 | - |
| 13.2911 | 1050 | 0.0001 | - |
| 13.9241 | 1100 | 0.0001 | - |
| 14.5570 | 1150 | 0.0001 | - |
| 15.1899 | 1200 | 0.0001 | - |
| 15.8228 | 1250 | 0.0001 | - |
| 16.4557 | 1300 | 0.0001 | - |
| 17.0886 | 1350 | 0.0001 | - |
| 17.7215 | 1400 | 0.0001 | - |
| 18.3544 | 1450 | 0.0001 | - |
| 18.9873 | 1500 | 0.0 | - |
| 19.6203 | 1550 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
prithivMLmods/Llama-3.2-1B-GGUF
|
prithivMLmods
| 2024-11-20T15:01:35Z | 210 | 7 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"ollama",
"fast-llama",
"1B",
"meta",
"meta-llama",
"token",
"text-generation",
"en",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-26T13:00:49Z |
---
base_model:
- meta-llama/Llama-3.2-1B
language:
- en
license: creativeml-openrail-m
tags:
- text-generation-inference
- transformers
- llama
- gguf
- ollama
- fast-llama
- 1B
- meta
- meta-llama
- token
pipeline_tag: text-generation
---
# Llama-3.2-1B-GGUF [ GPT-Generated Unified Format ]
| File Name | Size | Description |
|------------------------|----------|-------------------------------------------------------|
| `.gitattributes` | 1.73kB | Git configuration file specifying attributes and LFS rules. |
| `gguf.gif` | 6.0kB | GIF file showing visual content related to the project. |
| `Llama-3.2-1B.F16.gguf`| 2.48GB | Unquantized 16-bit float (F16) model file for Llama 3.2 1B. |
| `Modelfile` | 1.95kB | Model-specific file containing metadata or configurations. |
| `README.md` | 3.91kB | Markdown file with project information and instructions. |
| `config.json` | 29B | JSON configuration file for setting model parameters. |
| Attribute | Description |
|-------------------------|-----------------------------------------------------------------------------|
| **Developed by** | prithivMLmods |
| **License** | CreativeML OpenRAIL-M |
| **Finetuned from model**| meta-llama/Llama-3.2-1B |
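Besides Ollama, the F16 file can be loaded with any llama.cpp-based runtime. A minimal sketch using `llama-cpp-python` (an assumed choice of loader; the path assumes you have downloaded `Llama-3.2-1B.F16.gguf` from this repo):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the downloaded GGUF file; n_ctx is an illustrative context size
llm = Llama(model_path="Llama-3.2-1B.F16.gguf", n_ctx=2048)

out = llm("Write one sentence about reusable rockets.", max_tokens=48)
print(out["choices"][0]["text"])
```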
# Run with Ollama ๐ฆ
## Overview
Ollama is a tool for running large language models locally with minimal setup. This guide walks you through downloading, installing, and running your own GGUF models in just a few minutes.
## Table of Contents
- [Download and Install Ollama](#download-and-install-ollama)
- [Steps to Run GGUF Models](#steps-to-run-gguf-models)
- [1. Create the Model File](#1-create-the-model-file)
- [2. Add the Template Command](#2-add-the-template-command)
- [3. Create and Patch the Model](#3-create-and-patch-the-model)
- [Running the Model](#running-the-model)
- [Sample Usage](#sample-usage)
## Download and Install Ollama๐ฆ
To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system.
## Steps to Run GGUF Models
### 1. Create the Model File
First, create a model file and name it appropriately. For example, you can name your model file `metallama`.
### 2. Add the Template Command
In your model file, include a `FROM` line that specifies the base model file you want to use. For instance:
```bash
FROM Llama-3.2-1B.F16.gguf
```
Ensure that the GGUF model file is in the same directory as your Modelfile, or point the `FROM` line at its full path.
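A complete `metallama` Modelfile can also carry the optional `TEMPLATE` and `PARAMETER` directives this section's title refers to. The sketch below is illustrative only — the template string is a simplified placeholder, not the exact Llama 3.2 chat format:
```bash
FROM Llama-3.2-1B.F16.gguf

# Optional: simplified prompt template (placeholder, not the exact Llama 3.2 format)
TEMPLATE """{{ .Prompt }}"""

# Optional: sampling defaults
PARAMETER temperature 0.7
```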
### 3. Create and Patch the Model
Open your terminal and run the following command to create and patch your model:
```bash
ollama create metallama -f ./metallama
```
Once the process is successful, you will see a confirmation message.
To verify that the model was created successfully, you can list all models with:
```bash
ollama list
```
Make sure that `metallama` appears in the list of models.
---
## Running the Model
To run your newly created model, use the following command in your terminal:
```bash
ollama run metallama
```
### Sample Usage
In the command prompt, you can execute:
```bash
D:\>ollama run metallama
```
You can interact with the model like this:
```plaintext
>>> write a mini passage about space x
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration.
With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in
the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have
demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented
efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes
increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without
sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X
plays a pivotal role in pushing the boundaries of human exploration and settlement.
```
---
## Conclusion
With these simple steps, you can easily download, install, and run your own models using Ollama. Whether you're exploring the capabilities of Llama or building your own custom models, Ollama makes it accessible and efficient.
# Llama-3.2-1B-GGUF

You're now ready to run your own model with Ollama!
๐ฆ - https://youtu.be/_9IcVFuql2s?si=0NjKlJ1GDCmuJHvQ
|
mini1013/master_cate_lh2
|
mini1013
| 2024-11-20T14:46:23Z | 292 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T14:46:01Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์ ๊ธฐ ์คํํด๋น๊ธฐ ์๋ ๋ฐฐ๊ด ๋ํ๋ฐฉ์ง ๊ณ ์จ ๊ณต๊ตฌ ์คํ ๊ณ ์ฑ๋ฅ ๊ณ ์ 2500W ๋์งํธ 7์ ์ธํธ 2500W ์ฐ์
์ฉ ์จ๋์กฐ์ 7์ข
์ธํธ+์๋ฉํจํ๋๋น๋ฆฌ์ง
- text: ์คํ
๋์ฌ๋ชป ๋ชฉ์ฌ ํผ์ค ๋ชฉ๊ณต ์ฒ ํ ๋์ฌ ์ง๊ฒฐ ์์ค๋จธ๋ฆฌ 4-13(25๊ฐ) 11. ์คํ
ํธ๋ผ์ค๋จธ๋ฆฌ ๋ณผํธ_M5-40 (5๊ฐ) ๋ฆฌ๋ํ์ค๋
- text: ์์ ๋ดํฌ ํ๋ฐฐ ํฌ์ฅ ๋ฝ๋ฝ์ด 0ํธ 100X100+40 10๋งค ์๋ ์ฃผํฉ [๋น์ ์ฐฉ] ํฌ๋ช
์์ด์บก ๋ดํฌ - 0.2T_18ํธ 250x350 10๋งค ์ฃผ์ํ์ฌ ์ด๊ณ ๋ค(IGODA CO. ,Ltd.)
- text: ํ ๋ค์ด๋ ๋ค์ด์๋ชฌ๋ ์ต์ฐฉ์ฝ์ด๋นํธ ํด๋ฆฌ์ฑ ๋๋ฆฌ์ ์ฒ๊ณต TQ5 57_TTC 17 ์ฃผ์ํ์ฌ ํฌ์์ค
- text: ํนํ ๋ ํธ๋์์ผ ๋ณต์ค์ 233504M 2. ๋กฑํธ๋์์ผ(์ก๊ฐ)_2-21 323513M 3/8x13mm ์ ๋ก๋์ธ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.6113686482182797
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a rough training sketch follows).
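A rough sketch of that two-stage recipe with the `setfit` `Trainer` — the tiny dataset and hyperparameters below are placeholders, not this card's exact settings (those are listed under Training Hyperparameters):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot dataset: "text" and integer "label" columns
train_ds = Dataset.from_dict({
    "text": ["product title A", "product title B", "product title C", "product title D"],
    "label": [0, 1, 0, 1],
})

# Sentence Transformer body + LogisticRegression head
model = SetFitModel.from_pretrained("mini1013/master_domain")

args = TrainingArguments(batch_size=16, num_epochs=1)  # placeholder values
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # stage 1: contrastive fine-tuning of the body; stage 2: fitting the head
```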
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 19 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'์ฅ์ธ๋ค์๋๊ตฌ๋ค ์คํ ํผ๋ ๋๊ด์ปคํฐ๊ธฐ 4๋ถ์1 (๋นจ๊ฐ) ์๋์์ค์ปดํผ๋(LDS Company)'</li><li>'๋๋ง ๋ฌผํฑํฌ ๋์ฉ๋ ํ๋ผ์คํฑ ์ฐ์ ๋ฌผ์ ์ฅ ๋ฐฐ๋ด ํฑํฌ 25kg (๋ด๋ถ ์ปค๋ฒ ํฌํจ) ๋ฏธ๋คํ์ด์ง'</li><li>'์ธ์ํธ ๋นํธ ์ธํธ, 9๊ฐ ์์ค์ ์ต'</li></ul> |
| 18.0 | <ul><li>'[๋ค์ด์๋ฐ์ค] ์คํํ ์ด์ฌ๋ฐ์ค ๋จํ๋ผ ์ด์ฟ์ง ํ๋ผ์คํฑ ์ด์ฌ์ฉ 15. (๊ณ ๊ธํ) ์คํ๋ฐ์ค_(๊ณ ๊ธํ) ์คํ ์ฒญ์ ์ฃผ์ํ์ฌ ์์ฑ์๋ฃจ์
'</li><li>'PE ๋กค๋น๋ ํต๋น๋ 0.03mm 12cm x 457m 0.1mm_์ ํ29. (ํญ100cm x ๊ธธ์ด91m) ์ฃผ์ํ์ฌ ์ ์ผ์ฌ๋จ'</li><li>'ํต์ต ๊ณต๋จ๋ณด์๊ธฐ 45cm 10์ฅ๋ฌถ์ 65ร65 -10์ฅ ๋ฌถ์_์ฐ๋
ธ๋ ํต์ต'</li></ul> |
| 5.0 | <ul><li>'๋ง๋ผ๋ค ์ถฉ์ ์ง์ DJV184Z ๋ณธ์ฒด 18V ์๋์กฐ์ ๋ธ๋ฌ์ฌ๋ฆฌ์ค ์ถฉ์ ์ง์ ๋์ ๋๋ง์ผ'</li><li>'[UDT] ์์ด ํด๋จธ ํจ๋ง ์น์ฆ4๊ฐ ํฌํจ UD-1042 ์ ํ3. ์์ดํด๋จธ (UD-1043) ์ฃผ์ํ์ฌ ๋ง์ดํฌ๋ก์ ์์ค'</li><li>'ํํฌ๊ธฐ ๊ธฐํฌ์ ๊ฑฐ ๋ ์ง ๊ณต์ ์ค๋ฆฌ์ฝ ๋ชฐ๋ ์ง๊ณต ํ์ฅํ 19L ์ง๋ฆ30Cm ๋์ด 30Cm ํ ๋งํก'</li></ul> |
| 4.0 | <ul><li>'์นจ๋ ๋๊ฐ ์ด๋ฅด์ ํ์ ๊ธฐ์ ๋ณด์กฐ ํ๋ ์ A. ํ์๊ฐ ์ข์ ํ๊ฑธ์ด ์์๋ก'</li><li>'์ฒญ์์ค ์ฃผ์ฐจ๊ธ์ง ์
๊ฐํ Aํํ์งํ ๊ณต์ฌ์ค ๋ฏธ๋๋ผ์ฃผ์ ์ฃผ์ฐจ๊ธ์ง ์ฌ์ํ'</li><li>'์คํ ์ค ๊ณ ๊ธ ํฌ๋ช
๋ณด์๊ฒฝ 618A ๋์ ์์ผํ๋ณด ๊ธํ๋ฐฉ์ง ๋ฌด๊ฒ 48.4g ๊ฐ๋ฒผ์ด ๋ณดํธ์๊ฒฝ OTOS ๋ณด์๊ฒฝ B-618ASF ์ง์ ํธ๋ฆฌ (GMTree)'</li></ul> |
| 14.0 | <ul><li>'ํ ์์
์คํ
M6 1๊ฐ ๋ฑ๊ฐ ํ๋งค ์ ์ด์์ปค๋จธ์ค'</li><li>'ํ๋์ง ํ๋ ์ง ์คํ
ํ๋ ์น๋ํธ m3 (10๊ฐ) 5/16์ธ์น (5๊ฐ) ์ผ์ง๊ธ์'</li><li>'ํฌ๋ ์คํ๋งํด๋จํ ์ง๊ฒ ํธ๋ ์๋ฅด์ ์ผ ์กฐ๋ช
๊ณ ์ 3201-HT ๋ฒ ์ธ์ด XM3 ์ฃผ์ํ์ฌ ์ข
์ํด๋ง'</li></ul> |
| 8.0 | <ul><li>'ํ๋กํ ๋ฏธ๋์์น ๋จ์ 220V ํธ๋ฐ ์์ด์ด์์น PW160 2ํ_160kg_PW160 ์ฃผ์ํ์ฌ ๋ค๋ณด์ธํฐ๋ด์
๋(DAVOInternational Co.,Ltd.)'</li><li>'๋์ํธ ์ญ ๋ฆฌํํธ DWHT83550 ๋ง๋ฅ ์ง๋ซ๋ ์ฎ๊ธฐ๊ธฐ ๋์ฅ๊ณ ๊ฐ๊ตฌ ์ํ ํธ๋ 01.๋์ํธ์ญ๋ฆฌํํธ(์ค์ฌ๊ณ X ์
๊ณ ์ ์ถ๊ณ ) ์ฃผ์ํ์ฌ ๋์ฑํด (dongsungtool)'</li><li>'์ฌ๋ค๋ฆฌ ์ ์ด์ ๊ฐ์ ์ฉ 5๋จ aํ ์์ ๊ดํญ ์์
๋ฐํ ๋
ธ๋ NLS05 ๊ฒฝ๋์ฌ๋ค๋ฆฌ_NLL02 2๋จ ๊ณต๊ตฌ ์ ํธ(Z)'</li></ul> |
| 0.0 | <ul><li>'3M ์คํฐ์ง ํํ 7136270 02601(600~800๋ฐฉ) (์ฃผ)ํ๋ฉ์ํํธ'</li><li>'์ฅํ ํํ๊ธฐ ์๋ ์นผ ๋์ดํ ์ปทํฐ ์ปคํ
๊ธฐ PVC ๋ฐ๋ฅ ๋จ์ผ ์ํ ๋์ธํฌ๋ฆฌ์ธ๋ธ'</li><li>'์ค๋งํ ๋กฑ๋ผ์ณ๋ ์น ํ์ชฝ๋กฑ L 13x17L ์ค๋งํ ๋กฑ๋ผ์ณ๋ ์น (ํ์ชฝ๋กฑ)L 13x17L ๋์์ด์คํ ์ด'</li></ul> |
| 6.0 | <ul><li>'๊ทธ๋ฆฐ์์ค ๋ฌด์ ์ ์ ๊ธฐ 40V GMAX ๋ณธ์ฒด 01. ๋ณธ์ฒด ๋ค์ด์ค'</li><li>'D&D PowerDrive BP78 V ๋ฒจํธ 134247 ํธ๋ฆฌ๋ํฐ ์๋ด ๋ก์ฐ'</li><li>'ํผ๋ค 4ํ์ ์์ด๊ธฐ ์
๋ฌ๋ฆฌํฝ ๋ฐฐ๋ญ ํ์ ๋ฒ์ด๊ธฐ ์ ์ด๊ธฐ ์ธํธ ๋ฐฐ๋ญํ 2 GX35 ๋ฐฑํฉ 18.8KW ๋ํ๋ชฐA'</li></ul> |
| 12.0 | <ul><li>'๋์ํธ ์ถฉ์ ์ํํฑ DCS573N 184mm (18V/20V) ๋ณธ์ฒด๋ง ๋์ํธ ์ถฉ์ ์ํํฑ DCS573N 184mm ( ์ต์ ๊ฐ์ ํต๋ชฐ'</li><li>'๋์ํธ DCG405P2 20V ๋ธ๋ฌ์ฌ๋ฆฌ์ค ์ถฉ์ ๊ทธ๋ผ์ธ๋ (5.0Ahx2) ํ์ธํธ ์ฃผ์ํ์ฌ ๋ถ์ผํด๋ฆญ์ค'</li><li>'์ถฉ์ ์ง์ DJV184Z ๋ณธ์ฒด 18V ์๋์กฐ์ ๋ธ๋ฌ์ฌ๋ฆฌ์ค ์ถฉ์ ์ง์ ๋ง๋ผ๋ค ๋ฆฌ์จ์ ํต'</li></ul> |
| 11.0 | <ul><li>'์ฃผํผํฐ ์ฅฌํผํฐ ๋ฌด์ ์กํ๊ธฐ ๋์ฝ์ฒญ์๊ธฐ JUB-18DE ์์ด ์ถฉ์ ์ 18V ๋์ํธ ๋ฐฐํฐ๋ฆฌํธํ ์ด์์คํด'</li><li>'ํ์ดํ๋๋ฆด๋งจ ์คํค๋น๋ํ ์ผ๋ ์
ํํฐ๋งจ ์
๊ทธ๋ ์ด๋ ์คํผ์ปค๋งจ ํ๋ํ์ดํํด๋ฝํนํฉ๊ธ๋๊ฒ-536์
์ ์์ด์ 117'</li><li>'ํซ๋๋ฆด ๋ณด์ ํ๋น
๋ถ์ด๋ ํ ๋ฐฉ๋ฒ ๊ฑด ์ท ๋ค๋ฆผ์ง ํฌ์ธํธ ์๋ฆฌ ์ฒ ํค๋ ํจ์
์๋จ ๊พธ๋ฏธ๊ธฐ 06 ํํฌ ์ํํค๋ 7๊ฐ์ธ๋ํค๋ ์์ ์ผ๊ฐ ๊ณต์ฃผ&ํํฐ'</li></ul> |
| 2.0 | <ul><li>'๋ณด์ผ๋ฌ ์จ์ ์ํ ํํ ๋ชจํฐ ์๋ก ์์ง PH 045M ๋๋์ค๋ชฐ'</li><li>'์์ฃผํ์๊ด 3k์ฃผํ์ฉ ์ง์์ด(GY)์ฐ์
'</li><li>'์ํฐํํ ๋ฌผํํ ๋ค์ด์ํ๋จ ์ํ ๋ฏธ๋ ์์ค ๊ณ ์ด ๊ณ ์จ DC 12V 30W 04 ์ํฐํํ(๊ณ ๊ธํ-๋ํธ) 80W 12V ์ฃผ์ํ์ฌ ์ธ์ธํฐ์ํ'</li></ul> |
| 15.0 | <ul><li>'ํด๋์ฉ ์๋ ์นด์ดํฐ๊ธฐ ์ธ์์ฒดํฌ ์ด๋๋ ์ฒดํฌ ๊ณ์๊ธฐ ๊ฐ์ฑ๋
'</li><li>'ํฐํผ๋งํฌ TP-LINK Tapo T315 ์ค๋งํธ ์จ์ต๋ ์ผ์ ๋์คํ๋ ์ดํ Wi-Fi ํ๋ธ ์ด ์์ดํฐ ์ปค๋จธ์ค'</li><li>'ํด๋์ฉ ๋์งํธ ์์ฃผ์ธก์ ๊ธฐ ์ค์์ด๋ค'</li></ul> |
| 16.0 | <ul><li>'๋
ธ๋ฃจํ์ธํธ ์ํผ์๋๋ฉ ํ๋ฌ์ค 1L ๋ชฉ์ฌ์ฉ ์ฒ ์ฌ์ฉ ์๋๋ฉ ์ ์ฑ ํ์ธํธ ์ ๊ฐ์ ์ ๊ด ํน๋
น์_์ ๊ด(Gloss) ์ธ๊ณ์์ฌ'</li><li>'pieke ์ค๋ด ์ธํ
๋ฆฌ์ด์ฉ ํ๋ฆฌ๋ฏธ์ ์์ฑ ํ์ธํธ ํผํฌ ํ์ธํธ(1kg) abyss green (์ฃผ)ํด๊ด'</li><li>'[๋ชฌํ๋] ๋ชฌํ๋ ๊ณจ๋ ๊ทธ๋๋นํฐ ๋์ฉ๋ ๋ค์ฉ๋ ๋์์ฉ ์ํฌ๋ฆด ์บ์คํ๋ ์ด 400ml GOLD 5110_HIMALAYA ๋๋ฆผ๋ฐ์นผ์คํ ์ด'</li></ul> |
| 3.0 | <ul><li>'๋ฌด์ ์ธ์๋ฆฐ๋์ฅ๊ณ ํด๋ ์ถฉ์ ์ํ ์ฐจ๋ ์ฝํ ๋ณด๊ดํจ ๋จ์ผ ๋ฐฐํฐ๋ฆฌ(๋๊ธฐ์๊ฐ ์ฝ 10์๊ฐ) ์ ์ด์คํ ์ด'</li><li>'์ง๋ฅ์ค ์ ๋ฐ๋๋ผ์ด๋ฒ ์ธํธ Y์ ๋ณ๋์ฌ ์ก๊ฐ Y ์๊ณ ์๊ฒฝ ์ผ๊ฐ ์์ ์ญ์ 24in1 ์์ดํฐ ์ ์ฉ์๋ฆฌ๊ณต๊ตฌ 001339 ์ฃผ์ํ์ฌ ์๋น๋ค์ด์ด๋ฆฌ'</li><li>'๋์ํธ ํ๋ฆฌ๋ฏธ์ ์๊ณต๊ตฌ์ธํธ 184PCS ์ฝ์คํธ์ฝ ์์ผ ๋ผ์ณ ๊ณต๊ตฌ์ธํธ 108PCS ์ํฌ๋ฆฟ์นดํธ'</li></ul> |
| 7.0 | <ul><li>'๊ณ ์จ ๊ณ ์ ์คํ ์ธ์ฒ๊ธฐ ์ธ์ฐจ๊ธฐ ์ฒญ์๊ธฐ ์๋
์์ด์ปจ 2600W ํ ์
๊ทธ๋ ์ด๋ ์ง์ ๋ถ์ฌ ๋ชจ๋ธ ๊ฒ์ ๊ตฌ๋ฃจ๋ฏธ์ปดํผ๋'</li><li>'ํฌ๋ ํ ์ค ์ฐ์๊ฒ์ด์ง ๊ฐ์ค๋ ๊ทค๋ ์ดํฐ ๊ฐ์๊ธฐ ์กฐ์ ๊ธฐ EX-701 LPG-EX-705 ์ํต๊ณต๊ตฌ'</li><li>'ํฌ๋ ํ ์ค ์ฐ์๊ฒ์ด์ง ๊ฐ์ค๋ ๊ทค๋ ์ดํฐ ๊ฐ์๊ธฐ ์กฐ์ ๊ธฐ EX-701 CO2-EX-702B 220V-22 ์ํต๊ณต๊ตฌ'</li></ul> |
| 17.0 | <ul><li>'ํ์ธํธ๋ถ ๊ฑด์ถ๋์ฅ๊ธฐ๋ฅ์ฌ์ค๋น๋ฌผ ๋ฉ์๋ถ15mm ๋ง๋ถ ๋นฝ๋ถ ๋ฌธ์ ๋ํ ํ๋ถ ์ ์ฑ์์ฑ ๋ฏธ์ ๋ถ ํ์ธํธ ๋ถ ํญ70mm(11ํธ) ๋น์ผ์ด(B.K)์ฝ๋ฆฌ์'</li><li>'์ธํฐ๊ทธ๋ ์ธ ์ธํธ๋ผ๋ฐํฌ ํ๋ฒ์คํ
์ธ 10L ๋ฆฌ์น ์ด์ฝ๋ (์ฃผ)๋๋ฌด์์ฌ๋๋ค ๋์๋์๋ ํ์ธํธ'</li><li>'์์ฑ ๊ธ์ ๋ฐฉ์ฒญํ์ธํธ ์๊ฑดํ ์นํ๊ฒฝ ๋ฌด์ทจ ์คํ
์ธ๋ฆฌ์ค/์๋ฃจ๋ฏธ๋ํฉ๊ธ ์ปฌ๋ฌ ๋ธ๋ฃจx2_350g ์ฃผ์ํ์ฌ ํ๋ ์ด์ฐ'</li></ul> |
| 9.0 | <ul><li>'๋ฐฑ๋ง ์์๊ฐ์ A-3000(180mm) ๋์์์ฌ'</li><li>'ํ ์ข
๋ฒ ๋ฐ์ค ๊ฟ์ํ ๋ฒ๊ฟ ์ผ์๋ฒํต ๋ฐ๋ํ ์ฑ์ง 5๋จ ๊ฑด์กฐ 5๋จ ๊ฒฉ์ ์์ 1์ฐฝ ์์ค 5๋จ ๊ฒฉ์ ์์ 5์ฐฝ ๊ฐ๋์ฐ๊ธ๋ก๋ฒ'</li><li>'ํ์ข
๊ธฐ ๋ชจ์ข
๋ค๊ธฐ๋ฅ ์ฌ๊ธฐ ์๊ธ์น ๊ธฐ๊ณ ์ข
์ ์ฌ๋ ๋ฌด ์๋ฐฐ์ถ ํ์ข
6์ค(์ค๊ฐ๊ฒฉ 7.5~9) ์
๋ฌ๋ฆฌํฝ๋ชฐ'</li></ul> |
| 10.0 | <ul><li>'์งํฅ ์ฃผํ์ฉ ๊ฐ์ ์ฉ ๋์ ์ฐจ๋จ๊ธฐ 30A ์ํ ์ฐจ๋จ๊ธฐ๊ต์ฒด ์ฃผํ์ฉ(๊ฐ์ ์ฉ) ๋์ ์ฐจ๋จ๊ธฐ_01)2P 30A ์ํ ๊ทธ๋ฆฐ์ ๊ธฐ์กฐ๋ช
'</li><li>'๊ตญ์ฐ ํ์์ ๊ณต ์๋น์ 1์ ๋นจ๊ฐ 5m ๋์๋์๋น์ _6.0MM_10EA/SET ํผ์์คํด์ฆ'</li><li>'์บ ํ ๋ฆด์ 15m 4๊ตฌ ๋ฐ๋ฆฌํฐ๋ฆฌ ์บ ํ ์คํ ์บ ํ ๋๋์ ์ ๊ธฐ์ ์ฐ์ฅ์ 2๊ตฌ_์นดํค_15m ๋กฑ์บ '</li></ul> |
| 13.0 | <ul><li>'๋ค์ฐ์ค 707 ๋ฐ์ด์ค ๋ฐฉ์ ์ค๋ฆฌ์ฝ ํฌ๋ช
๋ฐ์ค 25๊ฐ์
์ฃผ๋ฐฉ ์ฑํฌ๋ ์์ค ์์กฐ ์ฝํน ๋ฐฉํ์ฉ_๋ค์ฐ 1199S ํ์ ๋ชจ๋ฉํธ๋ฆฌ (MOMENTREE)'</li><li>'๋ฒก์ค WD-40 ๋ฑ๊ฐ 360ml ์ ํ ์ค๋งํธ ์คํธ๋ก์ฐ SSํ WD40 ๋ฒก์ค WD-40 360ml(์ผ๋ฐํ) ๋ชจ๋ฉํธ๋ฆฌ (MOMENTREE)'</li><li>'์ฒ ๋ฌผํฉํ ๋ฆฌ ๋ฐฑ์ ์ค๋ ๋ฐฑ์๋ฉํธ ์์ค ํ์ฅ์ค ๋ณ๊ธฐ ๋ณด์ ๋ฉ์ง ํฌ๋ ์ ์ฐฉ์ Dํ ๊ณ ๋ฌดํค๋ผ ์ ์งํ๋์จ์ด'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.6114 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh2")
# Run inference
preds = model("ํ ๋ค์ด๋ ๋ค์ด์๋ชฌ๋ ์ต์ฐฉ์ฝ์ด๋นํธ ํด๋ฆฌ์ฑ ๋๋ฆฌ์ ์ฒ๊ณต TQ5 57_TTC 17 ์ฃผ์ํ์ฌ ํฌ์์ค")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 10.7474 | 27 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 50 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
| 10.0 | 50 |
| 11.0 | 50 |
| 12.0 | 50 |
| 13.0 | 50 |
| 14.0 | 50 |
| 15.0 | 50 |
| 16.0 | 50 |
| 17.0 | 50 |
| 18.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0067 | 1 | 0.3954 | - |
| 0.3356 | 50 | 0.3839 | - |
| 0.6711 | 100 | 0.2913 | - |
| 1.0067 | 150 | 0.2101 | - |
| 1.3423 | 200 | 0.1066 | - |
| 1.6779 | 250 | 0.0475 | - |
| 2.0134 | 300 | 0.0342 | - |
| 2.3490 | 350 | 0.0274 | - |
| 2.6846 | 400 | 0.028 | - |
| 3.0201 | 450 | 0.029 | - |
| 3.3557 | 500 | 0.0291 | - |
| 3.6913 | 550 | 0.0258 | - |
| 4.0268 | 600 | 0.0202 | - |
| 4.3624 | 650 | 0.0085 | - |
| 4.6980 | 700 | 0.0124 | - |
| 5.0336 | 750 | 0.0039 | - |
| 5.3691 | 800 | 0.0089 | - |
| 5.7047 | 850 | 0.0063 | - |
| 6.0403 | 900 | 0.0034 | - |
| 6.3758 | 950 | 0.0046 | - |
| 6.7114 | 1000 | 0.008 | - |
| 7.0470 | 1050 | 0.0048 | - |
| 7.3826 | 1100 | 0.0028 | - |
| 7.7181 | 1150 | 0.0042 | - |
| 8.0537 | 1200 | 0.0019 | - |
| 8.3893 | 1250 | 0.0008 | - |
| 8.7248 | 1300 | 0.0004 | - |
| 9.0604 | 1350 | 0.0003 | - |
| 9.3960 | 1400 | 0.0003 | - |
| 9.7315 | 1450 | 0.0002 | - |
| 10.0671 | 1500 | 0.0003 | - |
| 10.4027 | 1550 | 0.0002 | - |
| 10.7383 | 1600 | 0.0001 | - |
| 11.0738 | 1650 | 0.0002 | - |
| 11.4094 | 1700 | 0.0001 | - |
| 11.7450 | 1750 | 0.0001 | - |
| 12.0805 | 1800 | 0.0001 | - |
| 12.4161 | 1850 | 0.0001 | - |
| 12.7517 | 1900 | 0.0001 | - |
| 13.0872 | 1950 | 0.0001 | - |
| 13.4228 | 2000 | 0.0001 | - |
| 13.7584 | 2050 | 0.0001 | - |
| 14.0940 | 2100 | 0.0001 | - |
| 14.4295 | 2150 | 0.0001 | - |
| 14.7651 | 2200 | 0.0001 | - |
| 15.1007 | 2250 | 0.0001 | - |
| 15.4362 | 2300 | 0.0001 | - |
| 15.7718 | 2350 | 0.0001 | - |
| 16.1074 | 2400 | 0.0001 | - |
| 16.4430 | 2450 | 0.0001 | - |
| 16.7785 | 2500 | 0.0001 | - |
| 17.1141 | 2550 | 0.0001 | - |
| 17.4497 | 2600 | 0.0001 | - |
| 17.7852 | 2650 | 0.0001 | - |
| 18.1208 | 2700 | 0.0001 | - |
| 18.4564 | 2750 | 0.0001 | - |
| 18.7919 | 2800 | 0.0001 | - |
| 19.1275 | 2850 | 0.0001 | - |
| 19.4631 | 2900 | 0.0001 | - |
| 19.7987 | 2950 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
steph0713/deepffnllama-768_12_4-1
|
steph0713
| 2024-11-20T14:46:23Z | 49 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepffn-llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T14:34:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tanbinh2210/mlm_trained_phobert
|
tanbinh2210
| 2024-11-20T14:34:47Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-20T14:33:18Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
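The `Pooling` module above averages token embeddings (mean pooling). In outline — a hand-rolled sketch of the same operation, assuming a padded batch of token embeddings and its attention mask:
```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real (non-padding) tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sentence
    return summed / counts                         # (batch, 768) sentence embeddings
```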
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("tanbinh2210/mlm_trained_phobert")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
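The same embeddings support semantic search. The model name suggests a PhoBERT (Vietnamese) base, so a Vietnamese corpus is probably the intended setting — the strings below are made-up examples, and `util.semantic_search` is the standard Sentence Transformers helper:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tanbinh2210/mlm_trained_phobert")

# Made-up corpus and query for illustration
corpus = ["Hรดm nay trแปi rแบฅt ฤแบนp.", "Anh แบฅy lรกi xe ฤแบฟn sรขn vแบญn ฤแปng."]
query = "Thแปi tiแบฟt hรดm nay thแบฟ nร o?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': ...}]
```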
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.12.6
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
noaebbot/llama3.1-8B-insights
|
noaebbot
| 2024-11-20T14:33:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-31T16:28:40Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** noaebbot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
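The repo ships standard `pytorch`/`safetensors` weights, so a plain `transformers` load should work. A minimal inference sketch (an assumption — the card does not document a preferred loading path):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "noaebbot/llama3.1-8B-insights"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize the main insight:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```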
|
yakazimir/qwen_uncCPO_entropy_0_01
|
yakazimir
| 2024-11-20T14:33:18Z | 146 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"simpo",
"generated_from_trainer",
"conversational",
"dataset:yakazimir/ultrafeedback_binarized",
"base_model:trl-lib/qwen1.5-0.5b-sft",
"base_model:finetune:trl-lib/qwen1.5-0.5b-sft",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-19T08:19:51Z |
---
library_name: transformers
license: other
base_model: trl-lib/qwen1.5-0.5b-sft
tags:
- alignment-handbook
- trl
- simpo
- generated_from_trainer
- trl
- simpo
- generated_from_trainer
datasets:
- yakazimir/ultrafeedback_binarized
model-index:
- name: qwen_uncCPO_entropy_0_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen_uncCPO_entropy_0_01
This model is a fine-tuned version of [trl-lib/qwen1.5-0.5b-sft](https://huggingface.co/trl-lib/qwen1.5-0.5b-sft) on the yakazimir/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0500
- Sft Loss: 3.9220
- Rewards/chosen: -4.3252
- Rewards/rejected: -5.1044
- Rewards/accuracies: 0.6892
- Rewards/margins: 0.7793
- Logps/rejected: -5.1044
- Logps/chosen: -4.3252
- Logits/rejected: 0.1444
- Logits/chosen: 0.0509
## Model description
More information needed
## Intended uses & limitations
More information needed
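Pending fuller documentation, a minimal generation sketch — this assumes the tokenizer inherits a chat template from the SFT base, which may not hold for every revision:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yakazimir/qwen_uncCPO_entropy_0_01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give one tip for writing clear emails."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```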
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sft Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0563 | 0.2141 | 400 | 0.0573 | 4.8352 | -5.7454 | -6.0246 | 0.5445 | 0.2792 | -6.0246 | -5.7454 | 0.6512 | 0.5372 |
| 0.0533 | 0.4282 | 800 | 0.0524 | 4.2340 | -4.6954 | -5.0777 | 0.6157 | 0.3823 | -5.0777 | -4.6954 | 0.2939 | 0.1644 |
| 0.0533 | 0.6422 | 1200 | 0.0518 | 4.1504 | -4.5198 | -5.0186 | 0.6484 | 0.4989 | -5.0186 | -4.5198 | 0.4014 | 0.2684 |
| 0.0508 | 0.8563 | 1600 | 0.0512 | 4.0690 | -4.5220 | -5.0081 | 0.6491 | 0.4862 | -5.0081 | -4.5220 | 0.2498 | 0.1344 |
| 0.0529 | 1.0704 | 2000 | 0.0508 | 3.9195 | -4.3917 | -4.9646 | 0.6521 | 0.5729 | -4.9646 | -4.3917 | 0.3268 | 0.2181 |
| 0.0522 | 1.2845 | 2400 | 0.0504 | 4.1797 | -4.6133 | -5.2771 | 0.6647 | 0.6638 | -5.2771 | -4.6133 | 0.2727 | 0.1622 |
| 0.0515 | 1.4986 | 2800 | 0.0504 | 4.0933 | -4.4442 | -5.0786 | 0.6825 | 0.6344 | -5.0786 | -4.4442 | 0.2050 | 0.0984 |
| 0.0526 | 1.7127 | 3200 | 0.0503 | 4.0886 | -4.4943 | -5.1537 | 0.6751 | 0.6594 | -5.1537 | -4.4943 | 0.2002 | 0.0920 |
| 0.0533 | 1.9267 | 3600 | 0.0501 | 3.9857 | -4.3809 | -5.1003 | 0.6825 | 0.7195 | -5.1003 | -4.3809 | 0.1348 | 0.0421 |
| 0.0493 | 2.1408 | 4000 | 0.0500 | 3.9751 | -4.3954 | -5.1537 | 0.6840 | 0.7583 | -5.1537 | -4.3954 | 0.3029 | 0.1980 |
| 0.0522 | 2.3549 | 4400 | 0.0500 | 3.9820 | -4.4013 | -5.1632 | 0.6869 | 0.7619 | -5.1632 | -4.4013 | 0.2139 | 0.1131 |
| 0.0513 | 2.5690 | 4800 | 0.0500 | 3.9732 | -4.3709 | -5.1160 | 0.6944 | 0.7451 | -5.1160 | -4.3709 | 0.1787 | 0.0785 |
| 0.0498 | 2.7831 | 5200 | 0.0500 | 3.9372 | -4.3318 | -5.0969 | 0.6892 | 0.7651 | -5.0969 | -4.3318 | 0.2138 | 0.1134 |
| 0.0496 | 2.9972 | 5600 | 0.0500 | 3.9220 | -4.3252 | -5.1044 | 0.6892 | 0.7793 | -5.1044 | -4.3252 | 0.1444 | 0.0509 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
mini1013/master_cate_lh0
|
mini1013
| 2024-11-20T14:24:56Z | 207 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] |
text-classification
| 2024-11-20T09:35:11Z |
---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ์์๋ฅด ๋ง์คํฌ ํผํฌV๋ผ์ดํธํ 10๋งค KF94๋ง์คํฌ์๋ถ๋ฆฌํ ์ฌ๋ฆ์ฉ ์กฐ์ธ์ฑ ๋ํ ๋ธ๋ 50๋งค ๋ณต๋ฉ์ด๊ฐ๊ฒ
- text: ์ ์๋ด๋ฐฐ ๋ฌดํ๊ธฐ ํํธํก 3 ๊ฐ/๊ฐ cudo ONID ๋ฏธ๋ ํฌ๋ ์นดํธ๋ฆฌ์ง 1.0ohm ์ ํญ recoment Vape ํ 01 3pcs one pack ์ฌ๋ฐ์ด๋ฌด๋
- text: ๋ ์คํ REX POD ๋ฆด๋ ์ค ์ ์๋ด๋ฐฐ ํ RELX ํธํ ํฌ๋ ๋ฒ ์ดํ์ฝ๋
- text: ์์ผ๋ฆฌ ๋ฐฐ๋ํ
์คํธ๊ธฐ 30๊ฐ์
+์ํ
๊ธฐ 3๊ฐ์
๋ฐฐํ
๊ธฐ ๋ฐฐ๋์ผ ๋ฐฐ๋๊ธฐ [์์ ํ
์คํธ๊ธฐ]_ํด๋ฆฌ์ด ์ผ๋ฆฌ ํจ์คํธ X 3๊ฐ ๋ดํธ๋ฆฌํฌ์ค์ผ์ด ์ฃผ์ํ์ฌ
- text: ๋ถํธ ๋ธ์ด๋ฉ์ดํธ๋งฅ์ค ์ก์์
ํธํก์
๋ฌธ์ ์๋ด๋ฐฐ ์ค๋์ค๋ธ๋ ํ ์ด๋ฒ ์ดํ
inference: true
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9110184776944967
name: Metric
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 17 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 4.0 | <ul><li>'์ฒ๋
๊ฑฐ๋ถ์จ๊ตฌ๊ธฐ(์ ํ์)+์ง์๊ฐ์๋ช
์ฅ๋ด80+ํฐ๋ณด๋ผ์ดํฐ ์ธ์ง๋ฉ๋์นผ์ฝํ(์ฃผ)'</li><li>'๊ฐ์ ์ฉ ์์ฅ๋ด ๋ง๋ฆฐ์ฅ ์ฝ์ ์ฅ๊ฐ๋ฃจ ์ฃผ์ ๋ถ๋ฆฌ ๊น์ง์'</li><li>'[NF81261]์๋ด- ๊ธฐํฉ ์6ํธ ํ๊ตญ๋ฉ๋์นผ'</li></ul> |
| 9.0 | <ul><li>'๊ณ ๋ ค์์ง์นจ ์์์ถํ์นจ๊ด ์ฌํ๊ธฐ ์ธํธ ์นจ100๊ฐ ์์ฝ100๋งค ํ๋ฐฉ์ข
ํฉ'</li><li>'KM์นจ 1ํต(100์ 1000๊ฐ) ์ผํ์ฉ์นจ ์์ฝ์ 20๋งค ๋ฉธ๊ท ์คํ๋ง์นจ ์๋
0.40x60 ์ฃผ์ํ์ฌ ์์ด์น์ด๋ฐ์ด์ค'</li><li>'TG ์์๋ด1ํธ/6ํธ ๊ธ์ 5๋ฐ์ค 04. 6ํธ ๊ธ์ 5๋ฐ์ค '</li></ul> |
| 14.0 | <ul><li>'์ฌ๋ฐ์ค ์ค์ผ ์น ๋๋ฐ 12ml ๋์ฉ๋ 3. ์ฌ๋ฐ์ค์ค์ผ ์ค๋ฆฌ์ง๋ 30ml ๋ฐ์ดํธ๋์ค'</li><li>'๊ฟ์ ๊ฟ์ ์
๋ฒ๋ฆผ ๋ฐฉ์ง ๋ง์ฐ์ค ๋ฐด๋(25๋งค)-3๊ฐ ๋ฉ๋ํฉํ ๋ฆฌ'</li><li>'ํ์กฐ ์ ์นผ๋ฆฝํฌ์ค ๋ฏผํธ๋ฐค 25g ๋จํ ๊ตฌ๋งค_์ ์นผ๋ฆฝํฌ์ค ๋ฏผํธ๋ฐค 30g (2๊ฐ์ด์ ๋ฌด๋ฃ๋ฐฐ์ก) ํ์กฐ'</li></ul> |
| 12.0 | <ul><li>'ํฌ๋ฒ ์ดํ ์ ค๋ก ๋งฅ์ค ํ ๊ณตํ 0.6์ด ๋ฉ์ฌํ(๋ฑ๊ฐ1๊ฐ) ์ ์๋ด๋ฐฐ ์ ๋ด ์ฝ์ผ ์นดํธ๋ฆฌ์ง ๋์ด๋ฒ ์ดํ'</li><li>'๋ชฌ์คํฐํ์น ์ ์๋ด๋ฐฐ์ก์ ์ ๋ด์ก์ ์์ฑํ ์๋ก์๋ฒ ๋ผ ๋ฌด๋์ฝํด ์๋ฆฌ์ด์ปดํผ๋'</li><li>'๊ตญ๋ด๋ฐฐ์ก ์์ด์ฝ์ค3 ์บก, ํ๋ (์์ด์ฝ์ค ๋์ค ํธํ) ๋ฉ๊ฐํธ๋๋ํจ์
'</li></ul> |
| 5.0 | <ul><li>'๋ง์คํด๋ฆฝ ๋ง์คํฌ ์์ชฝ๊น์ง ๋ณดํธํ๋ ๋ง์คํฌ์คํธ๋ฉ ํ์ดํธ๊ทธ๋ ์ด ์ฃผ์ํ์ฌ ์์ด๋ฆฌ์ค'</li><li>'์์๋ฅด ๋ฅ๋ฐด๋ 10์ข
(ํ1) ์ค๋ฆฌ์ฝ ๋ฅ์คํธ๋ฉ ๋ง์คํฌ์ค ๊ตญ์ฐ CS-NB_PIN ๋ฅ ๋ฐด๋ (ํํฌ) 1๊ฐ ํด๋ฐ์ปดํผ๋ ์ฃผ์ํ์ฌ'</li><li>'์นด๋ก ์จ ๊ณ ๊ธ ๋ง์คํฌ ์คํธ๋ฉ ๋ชฉ๊ฑธ์ด 03.๋ฐ๋ฆฌํฐ๋ฆฌ์นดํค ํ์
๋ง(PINGSELLING)'</li></ul> |
| 8.0 | <ul><li>'ํด์กฑ์๊ฐ ์ ํ 6๋งค x 3ํฉ ๋ฐํจ์น ๋ค๋ฆฌ ์ข
์๋ฆฌ ์ฟจ๋ง์ํธ ์ผ๋ณธํํธ ๊ฑด๊ฐ์คํ ์ด'</li><li>'ํด์กฑ์๊ฐ ์ฟจ๋ง์ํธ 6๋งค์
x 5๊ฐ (์ฃผ)์จ์ ์ด์ด์์ '</li><li>'ํด์กฑ์๊ฐ ์ฟจ๋ง์ํธ 6๋งค์
x 5๊ฐ / ๋ค๋ฆฌ๊ด๋ฆฌ ํจ์น ์ฟจ๋งํจ์น 7.์จ๊ฐํ
๋ผํผ ์จ์ด ๋ฐ๋ฐ๋ฅ๋ฐด๋ 4๋งค์
x2๊ฐ jullia2222'</li></ul> |
| 16.0 | <ul><li>'๋ฒ ๋ฆฌ๋ฉ๋ ํ๋ผํ ์์ค ๋ ๋ชฌ 453g, 12๊ฐ ์์ดํธ๋ ์ด๋'</li><li>'์์ฑ์๋จธ๊ธฐ ์ ๋ชจ์ฉ ๊ณ ๊ธ ์
์์ฉ ์์ฑ์๋จธ ์
ํ ์์ค ์ค๋ ์ง ํธ๋ ์์ค 1์กฐ๊ฐ (450g) ํฌ๋ก๊ตฌ์ฏ'</li><li>'ํ๋ผํ ๋ณด์จ ์ฅ๊ฐ ์ฐ์ง ๋ณด์ต ๋ณดํธ ๋ฎ๊ฐ ๊ฐ์ ์ฉ ์ฅ๊ฐ 052640DC ๋์์ปดํผ๋'</li></ul> |
| 6.0 | <ul><li>'์ฐ๋ง์คํฌ KFAD ๋ํ 50์
ํ์ดํธ [๋ด์ถ๋ดํค์ค ์ฐ๋ง์คํฌ] KFAD๋ํํ์ดํธ50์
๊ฐ๋ณํฌ์ฅ ์ฃผ์ํ์ฌ ์น์ด์ฆ'</li><li>'์์ฝ์ฒ ์์ฝ์ธํ ์์ ์ฉ ์์ฑ ๋ดํ๋ง์คํฌ ๋ฐฑ์ 50๋งค ์ ์์ค์ต'</li><li>'์์ด๋ฐ๋๋ฆฌ KF94 ๋ง์คํฌ ์๋ถ๋ฆฌํ๋ง์คํฌ ์ปฌ๋ฌ 10๋งค MS_์ฝํผ์์ด๋ณด๋ฆฌ ์ฃผ์ํ์ฌ ๋ผ์จ๋ธ๋ผ๋์ค'</li></ul> |
| 13.0 | <ul><li>'์๊ฐ๋ฝ๋ง์ฌ์ง๊ธฐ ์๊ฐ๋ฝ์ง์๊ธฐ ์ง์๊ตฌ์ฌ ์๊ฐ๋ฝ๋กค๋ฌ ๋ก์ฆ์ํ๋ฃจ'</li><li>'์๊ฐ๋ฝ ์ง์๊ธฐ ํ์๋ฆฌ ํฉ๊ณกํ ์ง์๋ด ๋ธ๋ 365๋๋1'</li><li>'์ ๊ฐ๋ฝ ํธ๋ ๋ง์ฌ์ง ์ง์ ๋ง์ฌ์ง ์๋ง ๊ธฐ ์๊ฐ๋ฝ ๋ง์ฌ์ง ๋กค๋ฌ ์์ค๋๋ธ์ ์ปดํผ๋'</li></ul> |
| 3.0 | <ul><li>'๋์ฅ๊ฒ์ฌํคํธ ๋ถ๋ณ์ ํ๊ฒ์ฌ ์ด์ง๋ํ
ํธ ์ฉ์ข
์ ์ข
3๋งค ๋์ฅ๊ฒ์ฌํคํธ X 4๋งค (์ฃผ)๋์ง์ธํ'</li><li>'๋ฉ๋์ ์
ํ ์ด์ง๋ํ
ํธ ๋์ฅ๊ฒ์ฌ์ง/๊ฐํธ2๋ถ/์ด๊ธฐ ๋์ฅ์ ์๊ฐ์ง๋จ/์ฉ์ข
๊ฒ์ฌ ์ฃผ์ํ์ฌ ์์ฐ'</li><li>'์ฒญํ88 ์๋์ํ๋ฒฝ์ง๋๊ธฐ ๋ชฉ ๊ฐ๋ ์ ๊ฑฐ๊ธฐ ๋ฐฐ์ถ๊ธฐ ๊ฐ๋ด ์ ๋๊ธฐ ํธํก ์ฌํ ํํ๋ ์ด๋๊ธฐ 1_์ฐ๋์(๋ถํฌ๋ช
) ์ผ์ด๋น์คํ ์ด'</li></ul> |
| 1.0 | <ul><li>'ํ์ด๋ฒ R300B ์คํฌ์ธ ํ์ฐ 3์คํ์
ํธ๋๋ฉ์ด๋ ์ผ๊ตฌ ์ฉํ ๋จ์ ์ฌ์ R300B-08_XL ๋ฒ๋ค๋ฒ๋คRYU'</li><li>'๊ฒ๋ฅด๋ง๋ ์คํฌ์ธ ํ์ฐ ์์ด์จ ๋ฐธ๋ฐ์ค ๋ฐด๋ ๊ฒ์์ S์ฌ์ด์ฆ ๋ฅด๋ณด'</li><li>'๊ฑด๊ฐํ์ฐ ๊ฒ๋ฅด๋ง๋ํ์ฐ ์๋ ํ๋ณต ํผ๋ก 99 ๋จ์ฑ์ฉ ๋ก์ฆ ๊ณจ๋ (์ฌ ๋ธ๋ ๊ฒ๋ฅด๋ง๋) ๋์์ ๋ฌผ'</li></ul> |
| 10.0 | <ul><li>'๋ง์ค์ผ์ด ๋ฉํ ํฌ๋ช
์์ ๋ง์คํฌ ์๋น ์ฃผ๋ฐฉ ์ฃผ๋ฐฉ์ฉ ์์์ ๋ฉํ_1๊ฐ์
(์ฃผ)์ฝ์ค๋ชจ์ค์จ์ด'</li><li>'๋ง์ค์ผ์ด ๋ฉํ ํฌ๋ช
์์ ๋ง์คํฌ ์๋น ์ฃผ๋ฐฉ ์ฃผ๋ฐฉ์ฉ ์์์ ์์ฝ_5๊ฐ์
(์ฃผ)์ฝ์ค๋ชจ์ค์จ์ด'</li><li>'๋ง์ค์ผ์ด ๋ฉํ ํฌ๋ช
์์ ๋ง์คํฌ ์๋น ์ฃผ๋ฐฉ ์ฃผ๋ฐฉ์ฉ ์์์ ๋ฉํ_5๊ฐ์
(์ฃผ)์ฝ์ค๋ชจ์ค์จ์ด'</li></ul> |
| 2.0 | <ul><li>'์๋ก๋ง ๊ธ์ฐ ํ์ดํ ๋์ฝํด ์๋ ๋นํ ์คํฑ ํผ์ฐ๋ ๋นํ๋ฏผ ๋ด๋ฐฐ ๋์ฒด ๋ณต๋ง๋๋ท์ปด'</li><li>'๊ธ์ฐ๋ณด์กฐ์ ๊ธ์ฐํ์ดํ ์ฝ๊ตญ ๋์ฝํด ํจ์น ๋ฐฐ์ถ ๊ธ์ฐ์ด ์ฝ ์๋งจ'</li><li>'๋ด๋ฐฐ๋์ ํผ์ฐ๋ ์ฝ๊ตญ ๋นํ๋ฏผ ๋ด๋ฐฐ ํ์ดํ ์๊ทธ์ผ๋ผ'</li></ul> |
| 0.0 | <ul><li>'ํธ๋ฌด๋ก ๊ฒจ์ธ์ฉ ๋จผ์ง์๋ ๊ณ ๊ธ ์ฐจ๋ ต์ด๋ถ ๋จํ SS Q K SK(๋น์ผ์ถ๊ณ ) ํฌ๋ฆผ_์ํผํน SK ๋ฒํผ์นดํ์'</li><li>'ํ๋ํธ ๋ณด๊ด ํจ๋ ์ฝ๋ชฉ๊ฑธ์ด ๋ชฉ๊ฑธ์ด BW5BA3C5 ๊ณจ๋_ํ์ฌ์ฆN ํ๋์คํ ์ด16'</li><li>'ํ ๋ค์ด๋๋ชฉ๊ฑธ์ด ๋ง๋ผํค ex์์ด์ด ๊ฑด๊ฐ ๋ฌ๋ ์คํฌ์ธ ์ผ๊ตฌjkl493 EX์์ด์ดํ ๋ค์ด๋๋ชฉ๊ฑธ์ดํํฌ-ํ์ดํธ43cm ์ตํ๋ฆฌG'</li></ul> |
| 15.0 | <ul><li>'NS ๋ฐํฌ์ธ์ฒ์ 50์ ์ธ์ฒ์ ์ฝ์ธ์ฒ๊ธฐ ํ
๋ธ๋ฌ ๋ฌผ๋ณ ํ๋ ์์๊ธฐ ์กฐ๋ฆฌ๊ธฐ ์ด๊ท ์ธ์ ์ NS์ธ์ฒ์ (50์ ) ๋ฉ๋ํ์ธ'</li><li>'๋
ธ์ฆ์ค์ํผ ์ฝ์ธ์ฒ๊ธฐ(๋ถ๋ง10ํฌ)+๋ถ๋ง100ํฌset/์ฝ์ธ์ MinSellAmount ๋ฉ๋ํ์ธ'</li><li>'๋๊ฐ์ต๊ธฐ ํ์ ๊ธฐ ๋น๊ฐ ์ฝ๊ฑด์กฐ ์น๋ฃ ์ฝง๊ตฌ๋ฉ ์คํ๊ธฐ G. 50ml ์ฉ๋ + ๋๋ผ์ด ์์ดํฉ 5ํฉ ํ๋'</li></ul> |
| 11.0 | <ul><li>'์ก์ ์ ์๋ด๋ฐฐ ์ค๋ฆฌ์ง๋ Yocan Flat Plus ๋ฐฐํฐ๋ฆฌ ๋ฒ ์ดํ ํ 900mAh 510 ์ค๋ ๋ 10s ์์ด Cํ์
08ohm ์นดํธ๋ฆฌ์ง ์ ์ ๋ด๋ฐฐ Black_3pcs ํฑ์ผ'</li><li>'์ ์๋ด๋ฐฐ ๋ฌดํ๊ธฐ ํํธํก ๊ต์ฒด ์ํฌ๋ฆด ์ฒ๋ฅ ํจ๋ ์ปค๋ฒ Cthulhu AIO ๋ฐ์ค ๋ชจ๋ ์ก์ธ์๋ฆฌ 01 WHITE ํน๋ณํํ๋ฃจ์ง๊ตฌ'</li><li>'์์ ๋ฉ๋ก๋ง์ค ์
ํธํก ์ ์๋ด๋ฐฐ ์ก์ ๋ฌ๋ธํผ์น ํ๋น์ค'</li></ul> |
| 7.0 | <ul><li>'์์ค์ด์ด ์์๋ฐฉ์ง ์ด์ดํ๋ฌ๊ทธ ์๋ฉด๊ท๋ง๊ฐ ์ธต๊ฐ ์ฐจ๋จ ์์ NS4000 FI3000 TI5000 [์ ํ2]NS4000_์ค๋ ์ง ์์ค์ด์ด๋ณธ์ฌ'</li><li>'์ค๋ฆฌ์ฝ๊ท๋ฎ๊ฐ ์ผ์์ฉ๊ท๋ณดํธ์ปค๋ฒ ํ๋ง์ฉ๊ท๋ง๊ฐ ๋ฆฌํ
์ผํํฌ'</li><li>'๋งฅ์ค ์์๋ฐฉ์ง ๊ท๋ง๊ฐ ๋์ฉ๋ ์ํํธํผ ๋๋ฆผ๊ฑธ 100๊ฐ์
์ฝ ๊ณจ์ด ์์์ฐจ๋จ ์๋ฉด ์ด์ดํ๋ฌ๊ทธ ์ธํธ๋ผ 100๊ฐ์
(์ฃผ)์๋ฒ์
'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9110 |
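The aggregate score above can be reproduced in outline as follows — a minimal sketch that assumes a held-out list of labeled titles (the actual evaluation split and metric definition are not published with this card; the examples below are stand-ins):
```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("mini1013/master_cate_lh0")

# Stand-in held-out examples (hypothetical titles and labels)
texts = ["๊ธ์ฐ๋ณด์กฐ์  ๋์ฝํด ํจ์น", "์ฝ์ธ์ฒ๊ธฐ ๋ถ๋ง 100ํฌ"]
labels = [2.0, 15.0]

preds = model.predict(texts)
print(accuracy_score(labels, preds))
```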
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh0")
# Run inference
preds = model("๋ถํธ ๋ธ์ด๋ฉ์ดํธ๋งฅ์ค ์ก์์
ํธํก์
๋ฌธ์ ์๋ด๋ฐฐ ์ค๋์ค๋ธ๋ ํ ์ด๋ฒ ์ดํ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 10.4659 | 31 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 50 |
| 1.0 | 50 |
| 2.0 | 25 |
| 3.0 | 50 |
| 4.0 | 50 |
| 5.0 | 50 |
| 6.0 | 50 |
| 7.0 | 50 |
| 8.0 | 50 |
| 9.0 | 50 |
| 10.0 | 28 |
| 11.0 | 50 |
| 12.0 | 24 |
| 13.0 | 50 |
| 14.0 | 50 |
| 15.0 | 50 |
| 16.0 | 50 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0082 | 1 | 0.4305 | - |
| 0.4098 | 50 | 0.347 | - |
| 0.8197 | 100 | 0.1694 | - |
| 1.2295 | 150 | 0.0708 | - |
| 1.6393 | 200 | 0.0363 | - |
| 2.0492 | 250 | 0.0314 | - |
| 2.4590 | 300 | 0.0411 | - |
| 2.8689 | 350 | 0.0414 | - |
| 3.2787 | 400 | 0.0175 | - |
| 3.6885 | 450 | 0.0267 | - |
| 4.0984 | 500 | 0.0184 | - |
| 4.5082 | 550 | 0.0085 | - |
| 4.9180 | 600 | 0.0185 | - |
| 5.3279 | 650 | 0.0094 | - |
| 5.7377 | 700 | 0.0022 | - |
| 6.1475 | 750 | 0.0078 | - |
| 6.5574 | 800 | 0.0104 | - |
| 6.9672 | 850 | 0.004 | - |
| 7.3770 | 900 | 0.0081 | - |
| 7.7869 | 950 | 0.0058 | - |
| 8.1967 | 1000 | 0.0045 | - |
| 8.6066 | 1050 | 0.0021 | - |
| 9.0164 | 1100 | 0.0079 | - |
| 9.4262 | 1150 | 0.0021 | - |
| 9.8361 | 1200 | 0.0002 | - |
| 10.2459 | 1250 | 0.0001 | - |
| 10.6557 | 1300 | 0.0001 | - |
| 11.0656 | 1350 | 0.0001 | - |
| 11.4754 | 1400 | 0.002 | - |
| 11.8852 | 1450 | 0.0002 | - |
| 12.2951 | 1500 | 0.0039 | - |
| 12.7049 | 1550 | 0.0001 | - |
| 13.1148 | 1600 | 0.0001 | - |
| 13.5246 | 1650 | 0.002 | - |
| 13.9344 | 1700 | 0.0005 | - |
| 14.3443 | 1750 | 0.0002 | - |
| 14.7541 | 1800 | 0.0001 | - |
| 15.1639 | 1850 | 0.0001 | - |
| 15.5738 | 1900 | 0.0001 | - |
| 15.9836 | 1950 | 0.0001 | - |
| 16.3934 | 2000 | 0.0001 | - |
| 16.8033 | 2050 | 0.0001 | - |
| 17.2131 | 2100 | 0.0001 | - |
| 17.6230 | 2150 | 0.0001 | - |
| 18.0328 | 2200 | 0.0001 | - |
| 18.4426 | 2250 | 0.0001 | - |
| 18.8525 | 2300 | 0.0001 | - |
| 19.2623 | 2350 | 0.0 | - |
| 19.6721 | 2400 | 0.0001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
botways/mistral_finetune
|
botways
| 2024-11-20T14:22:31Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-20T14:13:28Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cristianmanosalva/NER-finetuning-XMLR-CM-V1
|
cristianmanosalva
| 2024-11-20T14:22:16Z | 122 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:biobert_json",
"base_model:raulgdp/xml-roberta-large-finetuned-ner",
"base_model:finetune:raulgdp/xml-roberta-large-finetuned-ner",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-20T14:20:22Z |
---
library_name: transformers
base_model: raulgdp/xml-roberta-large-finetuned-ner
tags:
- generated_from_trainer
datasets:
- biobert_json
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-finetuning-XMLR-CM-V1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: biobert_json
type: biobert_json
config: Biobert_json
split: validation
args: Biobert_json
metrics:
- name: Precision
type: precision
value: 0.9336523819882532
- name: Recall
type: recall
value: 0.9595349877040018
- name: F1
type: f1
value: 0.9464167585446528
- name: Accuracy
type: accuracy
value: 0.9819591471596839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetuning-XMLR-CM-V1
This model is a fine-tuned version of [raulgdp/xml-roberta-large-finetuned-ner](https://huggingface.co/raulgdp/xml-roberta-large-finetuned-ner) on the biobert_json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0849
- Precision: 0.9337
- Recall: 0.9595
- F1: 0.9464
- Accuracy: 0.9820
## Model description
More information needed
## Intended uses & limitations
More information needed
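Pending fuller documentation, the model can be exercised with the standard `transformers` token-classification pipeline — a minimal sketch with a made-up clinical sentence (entity types come from the biobert_json label set):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cristianmanosalva/NER-finetuning-XMLR-CM-V1",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("The patient was prescribed 500 mg of amoxicillin for pneumonia."))
```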
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2697 | 1.0 | 612 | 0.0995 | 0.9022 | 0.9392 | 0.9203 | 0.9726 |
| 0.0954 | 2.0 | 1224 | 0.0909 | 0.9171 | 0.9586 | 0.9374 | 0.9778 |
| 0.0661 | 3.0 | 1836 | 0.0789 | 0.9337 | 0.9581 | 0.9457 | 0.9816 |
| 0.0533 | 4.0 | 2448 | 0.0853 | 0.9317 | 0.9594 | 0.9454 | 0.9811 |
| 0.035 | 5.0 | 3060 | 0.0849 | 0.9337 | 0.9595 | 0.9464 | 0.9820 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
yakazimir/qwen_qfUNL_entropy_0_01
|
yakazimir
| 2024-11-20T14:21:03Z | 149 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"simpo",
"generated_from_trainer",
"conversational",
"dataset:yakazimir/ultrafeedback_binarized",
"base_model:trl-lib/qwen1.5-0.5b-sft",
"base_model:finetune:trl-lib/qwen1.5-0.5b-sft",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-19T08:18:30Z |
---
library_name: transformers
license: other
base_model: trl-lib/qwen1.5-0.5b-sft
tags:
- alignment-handbook
- trl
- simpo
- generated_from_trainer
datasets:
- yakazimir/ultrafeedback_binarized
model-index:
- name: qwen_qfUNL_entropy_0_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen_qfUNL_entropy_0_01
This model is a fine-tuned version of [trl-lib/qwen1.5-0.5b-sft](https://huggingface.co/trl-lib/qwen1.5-0.5b-sft) on the yakazimir/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6685
- Sft Loss: 1.5897
- Rewards/chosen: -1.6017
- Rewards/rejected: -2.2330
- Rewards/accuracies: 0.6506
- Rewards/margins: 0.6314
- Logps/rejected: -2.2330
- Logps/chosen: -1.6017
- Logits/rejected: 0.2142
- Logits/chosen: 0.1178
## Model description
More information needed
## Intended uses & limitations
More information needed
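Pending official usage instructions, a minimal generation sketch, assuming the model keeps the chat template of its trl-lib/qwen1.5-0.5b-sft base:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the fine-tuned Qwen model as a standard causal LM.
model_id = "yakazimir/qwen_qfUNL_entropy_0_01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumption: the chat template follows the base SFT model.
messages = [{"role": "user", "content": "Explain preference optimization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```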
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sft Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6889 | 0.2141 | 400 | 0.7003 | 1.4382 | -1.5229 | -1.6955 | 0.5579 | 0.1726 | -1.6955 | -1.5229 | 0.2817 | 0.1945 |
| 0.6916 | 0.4282 | 800 | 0.6822 | 1.5282 | -1.5414 | -1.8469 | 0.6076 | 0.3055 | -1.8469 | -1.5414 | 0.2875 | 0.2001 |
| 0.6757 | 0.6422 | 1200 | 0.6771 | 1.5574 | -1.5600 | -1.9539 | 0.6217 | 0.3939 | -1.9539 | -1.5600 | 0.2922 | 0.2043 |
| 0.6744 | 0.8563 | 1600 | 0.6739 | 1.5959 | -1.6093 | -2.0408 | 0.6335 | 0.4315 | -2.0408 | -1.6093 | 0.2827 | 0.1913 |
| 0.714 | 1.0704 | 2000 | 0.6719 | 1.5564 | -1.5625 | -2.0466 | 0.6269 | 0.4841 | -2.0466 | -1.5625 | 0.1990 | 0.1104 |
| 0.6715 | 1.2845 | 2400 | 0.6719 | 1.5799 | -1.5845 | -2.1083 | 0.6380 | 0.5238 | -2.1083 | -1.5845 | 0.2487 | 0.1536 |
| 0.6658 | 1.4986 | 2800 | 0.6707 | 1.6055 | -1.6197 | -2.1818 | 0.6454 | 0.5621 | -2.1818 | -1.6197 | 0.1108 | 0.0257 |
| 0.6709 | 1.7127 | 3200 | 0.6701 | 1.5845 | -1.5941 | -2.1721 | 0.6476 | 0.5780 | -2.1721 | -1.5941 | 0.1373 | 0.0502 |
| 0.659 | 1.9267 | 3600 | 0.6686 | 1.5568 | -1.5549 | -2.1383 | 0.6454 | 0.5835 | -2.1383 | -1.5549 | 0.1189 | 0.0332 |
| 0.6241 | 2.1408 | 4000 | 0.6689 | 1.5859 | -1.5837 | -2.1770 | 0.6454 | 0.5933 | -2.1770 | -1.5837 | 0.1840 | 0.0917 |
| 0.6443 | 2.3549 | 4400 | 0.6692 | 1.5919 | -1.6001 | -2.2168 | 0.6461 | 0.6166 | -2.2168 | -1.6001 | 0.0426 | -0.0398 |
| 0.6356 | 2.5690 | 4800 | 0.6686 | 1.5864 | -1.5964 | -2.2216 | 0.6484 | 0.6252 | -2.2216 | -1.5964 | 0.1106 | 0.0226 |
| 0.6448 | 2.7831 | 5200 | 0.6683 | 1.5882 | -1.5994 | -2.2308 | 0.6506 | 0.6314 | -2.2308 | -1.5994 | 0.0974 | 0.0105 |
| 0.6368 | 2.9972 | 5600 | 0.6685 | 1.5897 | -1.6017 | -2.2330 | 0.6506 | 0.6314 | -2.2330 | -1.6017 | 0.2142 | 0.1178 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
tomaarsen/mpnet-base-nq-cgist-triplet-gt
|
tomaarsen
| 2024-11-20T14:18:30Z | 15 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:50000",
"loss:CachedGISTEmbedLoss",
"en",
"dataset:tomaarsen/gooaq-hard-negatives",
"arxiv:1908.10084",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-20T14:18:00Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:50000
- loss:CachedGISTEmbedLoss
base_model: microsoft/mpnet-base
widget:
- source_sentence: what does the accounts receivable turnover measure?
sentences:
- The accounts receivable turnover ratio is an accounting measure used to quantify
a company's effectiveness in collecting its receivables or money owed by clients.
The ratio shows how well a company uses and manages the credit it extends to customers
and how quickly that short-term debt is collected or is paid.
- Capital budgeting, and investment appraisal, is the planning process used to determine
whether an organization's long term investments such as new machinery, replacement
of machinery, new plants, new products, and research development projects are
worth the funding of cash through the firm's capitalization structure ( ...
- The accounts receivable turnover ratio is an accounting measure used to quantify
a company's effectiveness in collecting its receivables or money owed by clients.
The ratio shows how well a company uses and manages the credit it extends to customers
and how quickly that short-term debt is collected or is paid.
- source_sentence: does gabapentin cause liver problems?
sentences:
- Gabapentin has no appreciable liver metabolism, yet, suspected cases of gabapentin-induced
hepatotoxicity have been reported. Per literature review, two cases of possible
gabapentin-induced liver injury have been reported.
- Strongholds are a type of story mission which only unlocks after enough progression
through the game. There are three Stronghold's during the first section of progression
through The Division 2. You'll need to complete the first two and have reached
level 30 before being able to unlock the final Stronghold.
- The most-common side effects attributed to Gabapentin include mild sedation, ataxia,
and occasional diarrhea. Sedation can be minimized by tapering from a smaller
starting dose to the desired dose. When treating seizures, it is ideal to wean
off the drug to reduce the risk of withdrawal seizures.
- source_sentence: how long should you wait to give blood after eating?
sentences:
- Until the bleeding has stopped it is natural to taste blood or to see traces of
blood in your saliva. You may stop using gauze after the flow stops – usually
around 8 hours after surgery.
- Before donation The first and most important rule – never donate blood on an empty
  stomach. "Eat a wholesome meal about 2-3 hours before donating to keep your blood
sugar stable," says Dr Chaturvedi. The timing of the meal is important too. You
need to allow the food to be digested properly before the blood is drawn.
- While grid computing involves virtualizing computing resources to store massive
amounts of data, whereas cloud computing is where an application doesn't access
resources directly, rather it accesses them through a service over the internet.
...
- source_sentence: what is the difference between chicken francese and chicken marsala?
sentences:
- Chicken is the species name, equivalent to our "human." Rooster is an adult male,
  equivalent to "man." Hen is an adult female, equivalent to "woman." Cockerel is
  a juvenile male, equivalent to "boy/young man."
- What is 99 kg in pounds? - 99 kg is equal to 218.26 pounds.
- The difference between the two is for Francese, the chicken breast is first dipped
in flour, then into a beaten egg mixture, before being cooked. For piccata, the
chicken is first dipped in egg and then in flour. Both are then simmered in a
lemony butter sauce, but the piccata sauce includes capers.
- source_sentence: what energy is released when coal is burned?
sentences:
- When coal is burned, it reacts with the oxygen in the air. This chemical reaction
converts the stored solar energy into thermal energy, which is released as heat.
But it also produces carbon dioxide and methane.
- When coal is burned it releases a number of airborne toxins and pollutants. They
include mercury, lead, sulfur dioxide, nitrogen oxides, particulates, and various
other heavy metals.
- Squad Building Challenges allow you to exchange sets of players for coins, packs,
and special items in FUT 20. Each of these challenges come with specific requirements,
such as including players from certain teams. ... Live SBCs are time-limited challenges
which often give out unique, high-rated versions of players.
datasets:
- tomaarsen/gooaq-hard-negatives
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
co2_eq_emissions:
emissions: 40.414352043491995
energy_consumed: 0.10397258579449552
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.297
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: MPNet base trained on Natural Questions pairs
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.22
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.44
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.22
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666663
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11599999999999999
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.092
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.09333333333333332
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.195
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.22666666666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.36733333333333335
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.27399441682859144
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.36331746031746026
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.20383864617106084
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.46
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.64
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.76
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.84
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.46
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3760000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.34
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03065300183409328
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.07824513873584021
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.12190077086725051
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.21649668807903738
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.38922276974007985
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5768571428571428
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.27830317958127815
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.54
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.58
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07200000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.37
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.52
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.57
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.68
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5226736410648857
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4790238095238095
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4780826341570998
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.28
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.52
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.58
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.28
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09599999999999997
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1371904761904762
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.32535714285714284
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3653571428571428
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.42940476190476184
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.34407947120145826
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3881666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2878001762783797
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.58
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.64
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.74
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21333333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14800000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.094
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.32
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.37
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.47
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3814617400581295
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4695793650793651
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.30635937490171045
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.12
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.66
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.12
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11200000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.066
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.12
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.56
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.66
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3660111210554949
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.27407936507936503
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2870484235732714
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.32
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.44
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.32
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.21333333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.196
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.144
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.012173283062756207
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.02038195250132044
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.028711609969173105
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.04001132454412933
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.17348600988460589
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.37288095238095237
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.04905096401443591
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.16
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.38
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.46
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.58
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.16
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.12666666666666665
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09200000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06000000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.15
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.36
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.43
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.54
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.34266003995975836
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.2898015873015873
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.29298840458552056
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.8
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.96
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3666666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.244
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7006666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8553333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8993333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9566666666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8760206475896655
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8545238095238095
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8464432234432234
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.36
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.48
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.54
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.68
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.36
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25999999999999995
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.21600000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.15200000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.07566666666666669
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.16166666666666665
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.22266666666666668
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.31466666666666665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2989790025477086
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.44196825396825395
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.23644335943955802
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.18
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.56
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.64
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.84
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.18
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08399999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.18
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.56
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.64
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.84
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5000949127836057
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.392515873015873
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4000896669795642
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.36
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.46
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.48
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.36
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666663
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10400000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.068
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.325
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.44
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.46
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.605
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4619142281323308
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.43007936507936506
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.42744465636932816
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.5306122448979592
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7142857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8571428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9591836734693877
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5306122448979592
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.44217687074829926
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4122448979591837
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.3530612244897959
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03881638827876476
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.09899647775241191
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.14503016807403868
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.23921899286976872
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.40436873020254516
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6520651117589893
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3151658515102787
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.3469701726844584
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5303296703296704
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6074725274725276
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7214756671899529
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3469701726844584
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23401360544217684
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18648037676609108
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13500470957613817
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.18488460123328906
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.32576774706513195
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3876666429564824
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.48913834108187415
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4103820562345277
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4603737509655877
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3391583508465161
name: Cosine Map@100
---
# MPNet base trained on Natural Questions pairs
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-triplet-gt")
# Run inference
sentences = [
'what energy is released when coal is burned?',
'When coal is burned, it reacts with the oxygen in the air. This chemical reaction converts the stored solar energy into thermal energy, which is released as heat. But it also produces carbon dioxide and methane.',
'When coal is burned it releases a number of airborne toxins and pollutants. They include mercury, lead, sulfur dioxide, nitrogen oxides, particulates, and various other heavy metals.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.22 | 0.46 | 0.38 | 0.28 | 0.34 | 0.12 | 0.32 | 0.16 | 0.8 | 0.36 | 0.18 | 0.36 | 0.5306 |
| cosine_accuracy@3 | 0.44 | 0.64 | 0.54 | 0.5 | 0.58 | 0.3 | 0.4 | 0.38 | 0.9 | 0.48 | 0.56 | 0.46 | 0.7143 |
| cosine_accuracy@5 | 0.5 | 0.76 | 0.58 | 0.52 | 0.64 | 0.56 | 0.44 | 0.46 | 0.92 | 0.54 | 0.64 | 0.48 | 0.8571 |
| cosine_accuracy@10 | 0.72 | 0.84 | 0.7 | 0.58 | 0.74 | 0.66 | 0.5 | 0.58 | 0.96 | 0.68 | 0.84 | 0.62 | 0.9592 |
| cosine_precision@1 | 0.22 | 0.46 | 0.38 | 0.28 | 0.34 | 0.12 | 0.32 | 0.16 | 0.8 | 0.36 | 0.18 | 0.36 | 0.5306 |
| cosine_precision@3 | 0.1667 | 0.4 | 0.18 | 0.22 | 0.2133 | 0.1 | 0.2133 | 0.1267 | 0.3667 | 0.26 | 0.1867 | 0.1667 | 0.4422 |
| cosine_precision@5 | 0.116 | 0.376 | 0.12 | 0.16 | 0.148 | 0.112 | 0.196 | 0.092 | 0.244 | 0.216 | 0.128 | 0.104 | 0.4122 |
| cosine_precision@10 | 0.092 | 0.34 | 0.072 | 0.096 | 0.094 | 0.066 | 0.144 | 0.06 | 0.134 | 0.152 | 0.084 | 0.068 | 0.3531 |
| cosine_recall@1 | 0.0933 | 0.0307 | 0.37 | 0.1372 | 0.17 | 0.12 | 0.0122 | 0.15 | 0.7007 | 0.0757 | 0.18 | 0.325 | 0.0388 |
| cosine_recall@3 | 0.195 | 0.0782 | 0.52 | 0.3254 | 0.32 | 0.3 | 0.0204 | 0.36 | 0.8553 | 0.1617 | 0.56 | 0.44 | 0.099 |
| cosine_recall@5 | 0.2267 | 0.1219 | 0.57 | 0.3654 | 0.37 | 0.56 | 0.0287 | 0.43 | 0.8993 | 0.2227 | 0.64 | 0.46 | 0.145 |
| cosine_recall@10 | 0.3673 | 0.2165 | 0.68 | 0.4294 | 0.47 | 0.66 | 0.04 | 0.54 | 0.9567 | 0.3147 | 0.84 | 0.605 | 0.2392 |
| **cosine_ndcg@10** | **0.274** | **0.3892** | **0.5227** | **0.3441** | **0.3815** | **0.366** | **0.1735** | **0.3427** | **0.876** | **0.299** | **0.5001** | **0.4619** | **0.4044** |
| cosine_mrr@10 | 0.3633 | 0.5769 | 0.479 | 0.3882 | 0.4696 | 0.2741 | 0.3729 | 0.2898 | 0.8545 | 0.442 | 0.3925 | 0.4301 | 0.6521 |
| cosine_map@100 | 0.2038 | 0.2783 | 0.4781 | 0.2878 | 0.3064 | 0.287 | 0.0491 | 0.293 | 0.8464 | 0.2364 | 0.4001 | 0.4274 | 0.3152 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.347 |
| cosine_accuracy@3 | 0.5303 |
| cosine_accuracy@5 | 0.6075 |
| cosine_accuracy@10 | 0.7215 |
| cosine_precision@1 | 0.347 |
| cosine_precision@3 | 0.234 |
| cosine_precision@5 | 0.1865 |
| cosine_precision@10 | 0.135 |
| cosine_recall@1 | 0.1849 |
| cosine_recall@3 | 0.3258 |
| cosine_recall@5 | 0.3877 |
| cosine_recall@10 | 0.4891 |
| **cosine_ndcg@10** | **0.4104** |
| cosine_mrr@10 | 0.4604 |
| cosine_map@100 | 0.3392 |
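As a sketch, the NanoBEIR results above could presumably be reproduced along these lines; `NanoBEIREvaluator` is assumed to be available (it ships with recent sentence-transformers releases, consistent with the 3.4.0.dev0 version listed below):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-triplet-gt")

# By default the evaluator runs all NanoBEIR datasets and reports per-dataset
# metrics plus the NanoBEIR mean (assumption: default configuration).
evaluator = NanoBEIREvaluator()
results = evaluator(model)
print(results)
```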
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### gooaq-hard-negatives
* Dataset: [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) at [87594a1](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives/tree/87594a1e6c58e88b5843afa9da3a97ffd75d01c2)
* Size: 50,000 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.53 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 59.79 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 58.76 tokens</li><li>max: 143 tokens</li></ul> |
* Samples:
| question | answer | negative |
|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what is the difference between calories from fat and total fat?</code> | <code>Fat has more than twice as many calories per gram as carbohydrates and proteins. A gram of fat has about 9 calories, while a gram of carbohydrate or protein has about 4 calories. In other words, you could eat twice as much carbohydrates or proteins as fat for the same amount of calories.</code> | <code>Fat has more than twice as many calories per gram as carbohydrates and proteins. A gram of fat has about 9 calories, while a gram of carbohydrate or protein has about 4 calories. In other words, you could eat twice as much carbohydrates or proteins as fat for the same amount of calories.</code> |
| <code>what is the difference between return transcript and account transcript?</code> | <code>A tax return transcript usually meets the needs of lending institutions offering mortgages and student loans. ... Tax Account Transcript - shows basic data such as return type, marital status, adjusted gross income, taxable income and all payment types. It also shows changes made after you filed your original return.</code> | <code>Trial balance is not a financial statement whereas a balance sheet is a financial statement. Trial balance is solely used for internal purposes whereas a balance sheet is used for purposes other than internal i.e. external. In a trial balance, each and every account is divided into debit (dr.) and credit (cr.)</code> |
| <code>how long does my dog need to fast before sedation?</code> | <code>Now, guidelines are aimed towards 6-8 hours before surgery. This pre-op fasting time is much more beneficial for your pets because you have enough food in there to neutralize the stomach acid, preventing it from coming up the esophagus that causes regurgitation under anesthetic.</code> | <code>Try not to let your pooch rapidly wolf down his/her food! Do not let the dog play or exercise (e.g. go for a walk) for at least two hours after having a meal. Ensure continuous fresh water is available to avoid your pet gulping down a large amount after eating.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
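As a rough sketch (not the exact training script), the guide-model setup above might be reconstructed as follows. The guide model name is an assumption: the 384-dimensional normalized BertModel shown above resembles a small embedding model, and all-MiniLM-L6-v2 is used here purely as a stand-in.
```python
from sentence_transformers import SentenceTransformer, losses

# Sketch only: reconstructing the loss configuration from the parameters above.
model = SentenceTransformer("microsoft/mpnet-base")
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # stand-in guide

loss = losses.CachedGISTEmbedLoss(model, guide=guide, temperature=0.01)
```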
### Evaluation Dataset
#### gooaq-hard-negatives
* Dataset: [gooaq-hard-negatives](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) at [87594a1](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives/tree/87594a1e6c58e88b5843afa9da3a97ffd75d01c2)
* Size: 10,048,700 evaluation samples
* Columns: <code>question</code>, <code>answer</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.61 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 58.16 tokens</li><li>max: 131 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 57.98 tokens</li><li>max: 157 tokens</li></ul> |
* Samples:
| question | answer | negative |
|:--------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>how is height width and length written?</code> | <code>The Graphics' industry standard is width by height (width x height). Meaning that when you write your measurements, you write them from your point of view, beginning with the width.</code> | <code>The Graphics' industry standard is width by height (width x height). Meaning that when you write your measurements, you write them from your point of view, beginning with the width. That's important.</code> |
| <code>what is the difference between pork shoulder and loin?</code> | <code>All the recipes I've found for pulled pork recommends a shoulder/butt. Shoulders take longer to cook than a loin, because they're tougher. Loins are lean, while shoulders have marbled fat inside.</code> | <code>They are extracted from the loin, which runs from the hip to the shoulder, and it has a small strip of meat called the tenderloin. Unlike other pork, this pork chop is cut from four major sections, which are the shoulder, also known as the blade chops, ribs chops, loin chops, and the last, which is the sirloin chops.</code> |
| <code>is the yin yang symbol religious?</code> | <code>The ubiquitous yin-yang symbol holds its roots in Taoism/Daoism, a Chinese religion and philosophy. The yin, the dark swirl, is associated with shadows, femininity, and the trough of a wave; the yang, the light swirl, represents brightness, passion and growth.</code> | <code>Yin energy is in the calm colors around you, in the soft music, in the soothing sound of a water fountain, or the relaxing images of water. Yang (active energy) is the feng shui energy expressed in strong, vibrant sounds and colors, bright lights, upward moving energy, tall plants, etc.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:-----:|:----:|:-------------:|:---------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.04 | 1 | 11.5142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2 | 5 | 9.438 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4 | 10 | 5.5516 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6 | 15 | 3.7045 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8 | 20 | 2.7618 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 25 | 2.1657 | 1.3177 | 0.2740 | 0.3892 | 0.5227 | 0.3441 | 0.3815 | 0.3660 | 0.1735 | 0.3427 | 0.8760 | 0.2990 | 0.5001 | 0.4619 | 0.4044 | 0.4104 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.104 kWh
- **Carbon Emitted**: 0.040 kg of CO2
- **Hours Used**: 0.297 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.35.0.dev0
- Datasets: 2.20.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ryefoxlime/gemma-2-2b-it-therapist
|
ryefoxlime
| 2024-11-20T14:15:00Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma2",
"text-generation",
"conversational",
"en",
"dataset:nbertagnolli/counsel-chat",
"arxiv:1910.09700",
"base_model:google/gemma-2-2b-it",
"base_model:quantized:google/gemma-2-2b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T19:10:27Z |
---
library_name: transformers
datasets:
- nbertagnolli/counsel-chat
language:
- en
base_model:
- google/gemma-2-2b-it
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
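Until the authors fill this in, a minimal sketch, assuming this fine-tune keeps the standard google/gemma-2-2b-it chat interface:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: assumes the base model's chat template applies to this fine-tune.
model_id = "ryefoxlime/gemma-2-2b-it-therapist"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "I've been feeling anxious lately. What can I do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```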
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eamonmckenna/whisper-small-ga-ie-4000
|
eamonmckenna
| 2024-11-20T14:07:25Z | 26 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"Irish",
"ASR",
"Speech_to_text",
"Gaeilge",
"ga",
"gd",
"gv",
"dataset:google/fleurs",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-18T09:11:50Z |
---
license: apache-2.0
datasets:
- google/fleurs
- mozilla-foundation/common_voice_17_0
language:
- ga
- gd
- gv
metrics:
- accuracy
base_model:
- openai/whisper-small
pipeline_tag: automatic-speech-recognition
tags:
- Irish
- ASR
- Speech_to_text
- Gaeilge
---
|
OpenVINO/gemma-2b-it-fp16-ov
|
OpenVINO
| 2024-11-20T14:04:42Z | 67 | 0 | null |
[
"openvino",
"gemma",
"base_model:google/gemma-2b-it",
"base_model:finetune:google/gemma-2b-it",
"license:gemma",
"region:us"
] | null | 2024-10-30T07:07:59Z |
---
license: gemma
license_link: https://choosealicense.com/licenses/gemma/
base_model: google/gemma-2b-it
---
# gemma-2b-it-fp16-ov
* Model creator: [google](https://huggingface.co/google)
* Original model: [gemma-2b-it](https://huggingface.co/google/gemma-2b-it)
## Description
This is [gemma-2b-it](https://huggingface.co/google/gemma-2b-it) model converted to the [OpenVINO™ IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to FP16.
## Compatibility
The provided OpenVINO™ IR model is compatible with:
* OpenVINO version 2024.5.0 and higher
* Optimum Intel 1.21.0 and higher
## Running Model Inference
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```bash
pip install optimum[openvino]
```
2. Run model inference:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/gemma-2b-it-fp16-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
## Limitations
Check the [original model card](https://huggingface.co/google/gemma-2b-it) for limitations.
## Legal information
The original model is distributed under the [gemma](https://choosealicense.com/licenses/gemma/) license. More details can be found in the [original model card](https://huggingface.co/google/gemma-2b-it).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel's Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel's products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
OpenVINO/gemma-2b-int4-ov
|
OpenVINO
| 2024-11-20T13:58:40Z | 16 | 0 | null |
[
"openvino",
"gemma",
"base_model:google/gemma-2b",
"base_model:quantized:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-10-30T07:31:54Z |
---
license: gemma
license_link: https://choosealicense.com/licenses/gemma/
base_model: google/gemma-2b
base_model_relation: quantized
---
# gemma-2b-int4-ov
* Model creator: [google](https://huggingface.co/google)
* Original model: [gemma-2b](https://huggingface.co/google/gemma-2b)
## Description
This is [gemma-2b](https://huggingface.co/google/gemma-2b) model converted to the [OpenVINOโข IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf).
## Quantization Parameters
Weight compression was performed using `nncf.compress_weights` with the following parameters:
* mode: **int4_asym**
* ratio: **1**
* group_size: **128**
For more information on quantization, check the [OpenVINO model optimization guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html).
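As a rough sketch, the same compression could be reproduced with NNCF along these lines (the IR paths are illustrative):
```python
import nncf
import openvino as ov

# Read the uncompressed OpenVINO IR model (path is illustrative).
core = ov.Core()
ov_model = core.read_model("gemma-2b/openvino_model.xml")

# Compress weights to asymmetric INT4 with the parameters listed above.
compressed = nncf.compress_weights(
    ov_model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,
    ratio=1.0,       # compress all eligible weight matrices
    group_size=128,  # quantization group size
)
ov.save_model(compressed, "gemma-2b-int4/openvino_model.xml")
```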
## Compatibility
The provided OpenVINOโข IR model is compatible with:
* OpenVINO version 2024.5.0 and higher
* Optimum Intel 1.21.0 and higher
## Running Model Inference
1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
```bash
pip install optimum[openvino]
```
2. Run model inference:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM
model_id = "OpenVINO/gemma-2b-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
## Limitations
Check the [original model card](https://huggingface.co/google/gemma-2b) for limitations.
## Legal information
The original model is distributed under [gemma](https://choosealicense.com/licenses/gemma/) license. More details can be found in [original model card](https://huggingface.co/google/gemma-2b).
## Disclaimer
Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intelโs Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intelโs products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
|
ijohn07/french-english-gte-base
|
ijohn07
| 2024-11-20T13:55:05Z | 11 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"new",
"passage-retrieval",
"sentence-similarity",
"pruned",
"custom_code",
"fr",
"en",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:quantized:Alibaba-NLP/gte-multilingual-base",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-20T13:37:10Z |
---
pipeline_tag: sentence-similarity
language:
- fr
- en
license: apache-2.0
tags:
- passage-retrieval
- sentence-similarity
- pruned
library_name: sentence-transformers
base_model: Alibaba-NLP/gte-multilingual-base
base_model_relation: quantized
---
# ๐ซ๐ท french-gte-multilingual-base
This model is a 51.4% smaller version of [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base)
for the French and English languages, created using the [mtem-pruner](https://huggingface.co/spaces/antoinelouis/mtem-pruner) space.
This pruned model should perform similarly to the original model for French and English language tasks with a much smaller
memory footprint. However, it may not perform well for other languages present in the original multilingual model, as tokens not
commonly used in French and English were removed from its vocabulary.
## Usage
You can use this model with the Transformers library:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "ijohn07/french-english-gte-base"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
```
Or with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("ijohn07/french-english-gte-base", trust_remote_code=True)  # the custom GTE architecture requires remote code
```
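For example, cross-lingual similarity can then be computed along these lines (the sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ijohn07/french-english-gte-base", trust_remote_code=True)

# Embed one French and one English sentence and compare them.
embeddings = model.encode(["Bonjour le monde !", "Hello world!"])
print(util.cos_sim(embeddings[0], embeddings[1]))
```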
**Credits**: cc [@antoinelouis](https://huggingface.co/antoinelouis)
|
PrunaAI/NESPED-GEN-StableCode-schemaLinking-dict-bnb-8bit-smashed
|
PrunaAI
| 2024-11-20T13:54:05Z | 5 | 0 | null |
[
"safetensors",
"stablelm",
"pruna-ai",
"base_model:NESPED-GEN/StableCode-schemaLinking-dict",
"base_model:quantized:NESPED-GEN/StableCode-schemaLinking-dict",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-11-20T13:50:12Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: NESPED-GEN/StableCode-schemaLinking-dict
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8 (see the sketch after this list).
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NESPED-GEN/StableCode-schemaLinking-dict are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/NESPED-GEN-StableCode-schemaLinking-dict-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NESPED-GEN/StableCode-schemaLinking-dict")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model NESPED-GEN/StableCode-schemaLinking-dict, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
davanstrien/fineweb-edu-classifier-onnx
|
davanstrien
| 2024-11-20T13:48:37Z | 121 | 1 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:HuggingFaceFW/fineweb-edu-llama3-annotations",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-20T13:27:41Z |
---
language:
- en
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-edu-llama3-annotations
library_name: transformers
---
# FineWeb-Edu classifier
## Model summary
This is a classifier for judging the educational value of web pages. It was developed to filter and curate educational content from web datasets and was trained on 450k [annotations](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-llama3-annotations) generated by [Llama3-70B-instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for web samples from the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) dataset.
We used this classifier to build the [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) dataset.
### How to use in transformers
To load the FineWeb-Edu classifier, use the following code:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/fineweb-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceTB/fineweb-edu-classifier")
text = "This is a test sentence."
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)
logits = outputs.logits.squeeze(-1).float().detach().numpy()
score = logits.item()
result = {
"text": text,
"score": score,
"int_score": int(round(max(0, min(score, 5)))),
}
print(result)
# {'text': 'This is a test sentence.', 'score': 0.07964489609003067, 'int_score': 0}
```
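Since this repository also ships an ONNX export, it can presumably be loaded through Optimum's ONNX Runtime integration as well; a minimal sketch, assuming the ONNX weights follow Optimum's default layout:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "davanstrien/fineweb-edu-classifier-onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This is a test sentence.", return_tensors="pt", truncation=True)
score = model(**inputs).logits.squeeze(-1).item()
print(score)
```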
## Training
The classifier was trained on 450,000 pairs of web samples and their scores from 0 to 5, generated by Llama3. The samples were annotated based on their educational quality, with 0 being not educational and 5 being highly educational.
Below is the prompt used for LLama3 annotations:
<div style="text-align: center; margin: 20px 0;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/fjZQ4izIj1rx1xQnBTKKr.png" alt="Prompt for LLM annotation" style="width: 90%; max-width: 800px; height: auto;">
</div>
We added a classification head with a single regression output to [Snowflake-arctic-embed](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) and trained the model for 20 epochs with a learning rate of 3e-4. During training, the embedding and encoder layers were frozen to focus on the classification head. The model achieved an F1 score of 82% when converted to a binary classifier using a score threshold of 3.
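A minimal sketch of that setup, assuming the standard transformers classification wrapper (the actual training script may differ):
```python
from transformers import AutoModelForSequenceClassification

# Single regression output on top of the embedding model.
model = AutoModelForSequenceClassification.from_pretrained(
    "Snowflake/snowflake-arctic-embed-m", num_labels=1
)

# Freeze the embedding and encoder layers; only the classification head trains.
for name, param in model.named_parameters():
    if not name.startswith("classifier"):
        param.requires_grad = False
```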
**Training Details:**
- Model: Snowflake-arctic-embed with a classification head
- Dataset: 450,000 samples from Llama3 annotations
- Epochs: 20
- Learning Rate: 3e-4
- Evaluation Metric: F1 score
**Classification report**
We treat the regression model's predictions as discrete classes to calculate the metrics on a hold-out set of 46,867 Llama3-annotated samples.
```
precision recall f1-score support
0 0.75 0.49 0.59 5694
1 0.78 0.84 0.81 26512
2 0.57 0.61 0.59 10322
3 0.56 0.50 0.53 3407
4 0.58 0.35 0.44 807
5 0.33 0.01 0.02 125
accuracy 0.71 46867
macro avg 0.60 0.47 0.50 46867
weighted avg 0.71 0.71 0.71 46867
```
**Confusion matrix**
We verify that the predicted educational scores are indeed close to their ground truth and are mostly affected by the noisy annotations.
```
         2791   2858     45      0     0     0
          919  22343   3180     69     1     0
y_true      3   3225   6330    757     7     0
            1     66   1473   1694   173     0
            0      4     98    420   283     2
            0      0     18     85    21     1
                       y_pred
```
## Limitations
While the FineWeb-Edu classifier performs well in distinguishing high-quality educational content for the FineWeb dataset, there are some limitations:
- Scope: The model's performance might change on other datasets, in particular on out-of-distribution samples. It is also focused on educational content relevant to primary and grade school levels and may not perform as well on content intended for higher education or specialized domains.
- Bias: The model's performance depends on the quality and representativeness of the training data and of the LLM used for annotation. Biases in both can affect the classifier's judgments. It might overfit to academic-looking content for the higher scores, and we recommend using int_score >= 3 as a threshold for data curation (see the sketch after this list).
- Context: The classifier evaluates individual web pages or extracts without considering broader context, which might impact its effectiveness in certain scenarios.
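For instance, the recommended curation filter could be a hypothetical helper mirroring the `int_score` computation shown earlier:
```python
def keep_for_curation(score: float, threshold: int = 3) -> bool:
    """Keep a sample if its rounded educational score reaches the threshold."""
    return int(round(max(0, min(score, 5)))) >= threshold
```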
The training and inference code is available on [GitHub](https://github.com/huggingface/cosmopedia/tree/main/classification).
|
danelcsb/vitpose-plus-base
|
danelcsb
| 2024-11-20T13:47:59Z | 54 | 1 |
transformers
|
[
"transformers",
"safetensors",
"vitpose",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T13:46:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ijohn07/DeepSeek-Coder-V2-Lite-Base-Q8_0-GGUF
|
ijohn07
| 2024-11-20T13:43:53Z | 27 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-19T16:46:21Z |
---
license: other
license_name: deepseek-license
license_link: LICENSE
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Base
---
# ijohn07/DeepSeek-Coder-V2-Lite-Base-Q8_0-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-Coder-V2-Lite-Base`](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ijohn07/DeepSeek-Coder-V2-Lite-Base-Q8_0-GGUF --hf-file deepseek-coder-v2-lite-base-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ijohn07/DeepSeek-Coder-V2-Lite-Base-Q8_0-GGUF --hf-file deepseek-coder-v2-lite-base-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo ijohn07/DeepSeek-Coder-V2-Lite-Base-Q8_0-GGUF --hf-file deepseek-coder-v2-lite-base-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo ijohn07/DeepSeek-Coder-V2-Lite-Base-Q8_0-GGUF --hf-file deepseek-coder-v2-lite-base-q8_0.gguf -c 2048
```
|
MayBashendy/Arabic_FineTuningAraBERT_AugV4_k20_task3_organization_fold1
|
MayBashendy
| 2024-11-20T13:43:22Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-13T03:25:23Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV4_k20_task3_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV4_k20_task3_organization_fold1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8466
- Qwk: -0.1748
- Mse: 0.8466
- Rmse: 0.9201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
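These settings map onto transformers `TrainingArguments` roughly as follows (the output path is illustrative; model and dataset setup are not documented above):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="arabert-task3-fold1",   # illustrative path
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```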
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0043 | 2 | 5.0662 | 0.0562 | 5.0662 | 2.2508 |
| No log | 0.0087 | 4 | 2.3744 | 0.1692 | 2.3744 | 1.5409 |
| No log | 0.0130 | 6 | 0.5562 | 0.6944 | 0.5562 | 0.7458 |
| No log | 0.0174 | 8 | 1.1519 | -0.2692 | 1.1519 | 1.0733 |
| No log | 0.0217 | 10 | 1.3157 | -0.3095 | 1.3157 | 1.1471 |
| No log | 0.0260 | 12 | 1.3726 | -0.3095 | 1.3726 | 1.1716 |
| No log | 0.0304 | 14 | 1.1290 | -0.4444 | 1.1290 | 1.0625 |
| No log | 0.0347 | 16 | 0.9114 | 0.0 | 0.9114 | 0.9547 |
| No log | 0.0390 | 18 | 0.8245 | 0.0 | 0.8245 | 0.9080 |
| No log | 0.0434 | 20 | 1.0369 | -0.0845 | 1.0369 | 1.0183 |
| No log | 0.0477 | 22 | 1.1403 | 0.0 | 1.1403 | 1.0678 |
| No log | 0.0521 | 24 | 0.9033 | 0.0222 | 0.9033 | 0.9504 |
| No log | 0.0564 | 26 | 0.9333 | 0.0222 | 0.9333 | 0.9661 |
| No log | 0.0607 | 28 | 0.8953 | 0.0222 | 0.8953 | 0.9462 |
| No log | 0.0651 | 30 | 1.0668 | -0.0645 | 1.0668 | 1.0328 |
| No log | 0.0694 | 32 | 1.2762 | 0.0 | 1.2762 | 1.1297 |
| No log | 0.0738 | 34 | 1.4528 | 0.0 | 1.4528 | 1.2053 |
| No log | 0.0781 | 36 | 1.0889 | 0.0884 | 1.0889 | 1.0435 |
| No log | 0.0824 | 38 | 0.8171 | 0.0 | 0.8171 | 0.9040 |
| No log | 0.0868 | 40 | 0.9300 | -0.3276 | 0.9300 | 0.9644 |
| No log | 0.0911 | 42 | 1.0346 | -0.0476 | 1.0346 | 1.0172 |
| No log | 0.0954 | 44 | 1.2382 | -0.0097 | 1.2382 | 1.1127 |
| No log | 0.0998 | 46 | 1.3373 | 0.0 | 1.3373 | 1.1564 |
| No log | 0.1041 | 48 | 1.1701 | -0.0097 | 1.1701 | 1.0817 |
| No log | 0.1085 | 50 | 1.1217 | -0.0206 | 1.1217 | 1.0591 |
| No log | 0.1128 | 52 | 1.2706 | 0.0 | 1.2706 | 1.1272 |
| No log | 0.1171 | 54 | 1.4865 | 0.0 | 1.4865 | 1.2192 |
| No log | 0.1215 | 56 | 1.0426 | 0.3452 | 1.0426 | 1.0211 |
| No log | 0.1258 | 58 | 1.1278 | -0.0206 | 1.1278 | 1.0620 |
| No log | 0.1302 | 60 | 2.1569 | 0.0 | 2.1569 | 1.4686 |
| No log | 0.1345 | 62 | 2.3581 | 0.0 | 2.3581 | 1.5356 |
| No log | 0.1388 | 64 | 1.7302 | 0.0 | 1.7302 | 1.3154 |
| No log | 0.1432 | 66 | 0.8241 | 0.2414 | 0.8241 | 0.9078 |
| No log | 0.1475 | 68 | 0.6226 | 0.0 | 0.6226 | 0.7890 |
| No log | 0.1518 | 70 | 0.6655 | 0.0 | 0.6655 | 0.8158 |
| No log | 0.1562 | 72 | 0.7954 | 0.0 | 0.7954 | 0.8919 |
| No log | 0.1605 | 74 | 1.0642 | -0.0206 | 1.0642 | 1.0316 |
| No log | 0.1649 | 76 | 1.1631 | -0.1340 | 1.1631 | 1.0785 |
| No log | 0.1692 | 78 | 1.4101 | 0.0 | 1.4101 | 1.1875 |
| No log | 0.1735 | 80 | 1.4632 | 0.0 | 1.4632 | 1.2096 |
| No log | 0.1779 | 82 | 1.1959 | -0.1547 | 1.1959 | 1.0936 |
| No log | 0.1822 | 84 | 0.9702 | 0.0517 | 0.9702 | 0.9850 |
| No log | 0.1866 | 86 | 1.2792 | -0.1547 | 1.2792 | 1.1310 |
| No log | 0.1909 | 88 | 1.6128 | 0.0 | 1.6128 | 1.2700 |
| No log | 0.1952 | 90 | 1.4251 | -0.1159 | 1.4251 | 1.1938 |
| No log | 0.1996 | 92 | 1.4436 | -0.1159 | 1.4436 | 1.2015 |
| No log | 0.2039 | 94 | 1.6535 | 0.0 | 1.6535 | 1.2859 |
| No log | 0.2082 | 96 | 1.6642 | 0.0 | 1.6642 | 1.2900 |
| No log | 0.2126 | 98 | 1.6197 | 0.0 | 1.6197 | 1.2727 |
| No log | 0.2169 | 100 | 1.6019 | -0.0097 | 1.6019 | 1.2657 |
| No log | 0.2213 | 102 | 1.6511 | 0.0 | 1.6511 | 1.2850 |
| No log | 0.2256 | 104 | 1.7047 | 0.0 | 1.7047 | 1.3057 |
| No log | 0.2299 | 106 | 1.7458 | 0.0 | 1.7458 | 1.3213 |
| No log | 0.2343 | 108 | 1.5336 | -0.0097 | 1.5336 | 1.2384 |
| No log | 0.2386 | 110 | 1.5067 | -0.0206 | 1.5067 | 1.2275 |
| No log | 0.2430 | 112 | 1.7453 | 0.0 | 1.7453 | 1.3211 |
| No log | 0.2473 | 114 | 1.6037 | 0.0 | 1.6037 | 1.2664 |
| No log | 0.2516 | 116 | 1.4906 | 0.0 | 1.4906 | 1.2209 |
| No log | 0.2560 | 118 | 1.1787 | -0.0206 | 1.1787 | 1.0857 |
| No log | 0.2603 | 120 | 1.1812 | -0.0206 | 1.1812 | 1.0868 |
| No log | 0.2646 | 122 | 1.5331 | 0.0 | 1.5331 | 1.2382 |
| No log | 0.2690 | 124 | 1.7806 | 0.0 | 1.7806 | 1.3344 |
| No log | 0.2733 | 126 | 1.4523 | 0.2099 | 1.4523 | 1.2051 |
| No log | 0.2777 | 128 | 1.4744 | 0.2029 | 1.4744 | 1.2142 |
| No log | 0.2820 | 130 | 1.6868 | 0.0 | 1.6868 | 1.2988 |
| No log | 0.2863 | 132 | 1.6388 | -0.0097 | 1.6388 | 1.2802 |
| No log | 0.2907 | 134 | 1.6840 | -0.0097 | 1.6840 | 1.2977 |
| No log | 0.2950 | 136 | 1.7372 | 0.0 | 1.7372 | 1.3180 |
| No log | 0.2993 | 138 | 1.6976 | 0.0 | 1.6976 | 1.3029 |
| No log | 0.3037 | 140 | 1.6619 | 0.0 | 1.6619 | 1.2891 |
| No log | 0.3080 | 142 | 1.6420 | 0.0 | 1.6420 | 1.2814 |
| No log | 0.3124 | 144 | 1.3727 | -0.0206 | 1.3727 | 1.1716 |
| No log | 0.3167 | 146 | 1.5149 | 0.0 | 1.5149 | 1.2308 |
| No log | 0.3210 | 148 | 1.6476 | 0.0 | 1.6476 | 1.2836 |
| No log | 0.3254 | 150 | 1.6920 | 0.0 | 1.6920 | 1.3008 |
| No log | 0.3297 | 152 | 1.3687 | -0.0206 | 1.3687 | 1.1699 |
| No log | 0.3341 | 154 | 1.2953 | -0.0206 | 1.2953 | 1.1381 |
| No log | 0.3384 | 156 | 1.4127 | -0.0097 | 1.4127 | 1.1886 |
| No log | 0.3427 | 158 | 1.3350 | -0.0206 | 1.3350 | 1.1554 |
| No log | 0.3471 | 160 | 1.3169 | 0.0884 | 1.3169 | 1.1476 |
| No log | 0.3514 | 162 | 1.6885 | -0.0206 | 1.6885 | 1.2994 |
| No log | 0.3557 | 164 | 1.6339 | -0.0206 | 1.6339 | 1.2782 |
| No log | 0.3601 | 166 | 1.6417 | 0.0833 | 1.6417 | 1.2813 |
| No log | 0.3644 | 168 | 2.3745 | -0.0087 | 2.3745 | 1.5409 |
| No log | 0.3688 | 170 | 2.4787 | 0.0833 | 2.4787 | 1.5744 |
| No log | 0.3731 | 172 | 1.7118 | 0.0833 | 1.7118 | 1.3084 |
| No log | 0.3774 | 174 | 1.7248 | -0.0087 | 1.7248 | 1.3133 |
| No log | 0.3818 | 176 | 1.4414 | -0.0097 | 1.4414 | 1.2006 |
| No log | 0.3861 | 178 | 0.9418 | 0.0833 | 0.9418 | 0.9705 |
| No log | 0.3905 | 180 | 0.8870 | 0.2414 | 0.8870 | 0.9418 |
| No log | 0.3948 | 182 | 1.0472 | 0.0884 | 1.0472 | 1.0233 |
| No log | 0.3991 | 184 | 1.4630 | 0.0 | 1.4630 | 1.2095 |
| No log | 0.4035 | 186 | 1.2767 | 0.0928 | 1.2767 | 1.1299 |
| No log | 0.4078 | 188 | 0.7986 | 0.4310 | 0.7986 | 0.8936 |
| No log | 0.4121 | 190 | 0.6140 | 0.0 | 0.6140 | 0.7836 |
| No log | 0.4165 | 192 | 0.6122 | 0.0 | 0.6122 | 0.7825 |
| No log | 0.4208 | 194 | 0.7978 | 0.3803 | 0.7978 | 0.8932 |
| No log | 0.4252 | 196 | 1.3868 | -0.0206 | 1.3868 | 1.1776 |
| No log | 0.4295 | 198 | 1.9409 | 0.0 | 1.9409 | 1.3932 |
| No log | 0.4338 | 200 | 1.9997 | 0.0 | 1.9997 | 1.4141 |
| No log | 0.4382 | 202 | 1.3264 | 0.0884 | 1.3264 | 1.1517 |
| No log | 0.4425 | 204 | 0.8400 | 0.0222 | 0.8400 | 0.9165 |
| No log | 0.4469 | 206 | 0.8438 | 0.0222 | 0.8438 | 0.9186 |
| No log | 0.4512 | 208 | 1.2144 | 0.0884 | 1.2144 | 1.1020 |
| No log | 0.4555 | 210 | 1.9430 | 0.0 | 1.9430 | 1.3939 |
| No log | 0.4599 | 212 | 1.9468 | 0.0 | 1.9468 | 1.3953 |
| No log | 0.4642 | 214 | 1.4920 | -0.0097 | 1.4920 | 1.2215 |
| No log | 0.4685 | 216 | 1.1333 | 0.0884 | 1.1333 | 1.0646 |
| No log | 0.4729 | 218 | 1.0418 | -0.0476 | 1.0418 | 1.0207 |
| No log | 0.4772 | 220 | 1.2766 | -0.0206 | 1.2766 | 1.1299 |
| No log | 0.4816 | 222 | 1.5854 | 0.0 | 1.5854 | 1.2591 |
| No log | 0.4859 | 224 | 1.4566 | 0.0 | 1.4566 | 1.2069 |
| No log | 0.4902 | 226 | 1.3790 | -0.0097 | 1.3790 | 1.1743 |
| No log | 0.4946 | 228 | 1.1800 | 0.0884 | 1.1800 | 1.0863 |
| No log | 0.4989 | 230 | 1.0396 | 0.0833 | 1.0396 | 1.0196 |
| No log | 0.5033 | 232 | 1.1655 | 0.0884 | 1.1655 | 1.0796 |
| No log | 0.5076 | 234 | 1.7216 | -0.0097 | 1.7216 | 1.3121 |
| No log | 0.5119 | 236 | 1.8556 | -0.0097 | 1.8556 | 1.3622 |
| No log | 0.5163 | 238 | 1.4008 | -0.0206 | 1.4008 | 1.1835 |
| No log | 0.5206 | 240 | 0.9210 | -0.2791 | 0.9210 | 0.9597 |
| No log | 0.5249 | 242 | 0.8218 | 0.0388 | 0.8218 | 0.9065 |
| No log | 0.5293 | 244 | 0.9410 | -0.2791 | 0.9410 | 0.9700 |
| No log | 0.5336 | 246 | 0.9831 | -0.2394 | 0.9831 | 0.9915 |
| No log | 0.5380 | 248 | 1.0366 | -0.0476 | 1.0366 | 1.0182 |
| No log | 0.5423 | 250 | 1.1089 | 0.0884 | 1.1089 | 1.0530 |
| No log | 0.5466 | 252 | 1.1890 | 0.0884 | 1.1890 | 1.0904 |
| No log | 0.5510 | 254 | 1.3395 | 0.0884 | 1.3395 | 1.1574 |
| No log | 0.5553 | 256 | 1.6234 | -0.0097 | 1.6234 | 1.2741 |
| No log | 0.5597 | 258 | 1.6214 | 0.0 | 1.6214 | 1.2734 |
| No log | 0.5640 | 260 | 1.3883 | 0.0 | 1.3883 | 1.1782 |
| No log | 0.5683 | 262 | 1.1181 | 0.0884 | 1.1181 | 1.0574 |
| No log | 0.5727 | 264 | 0.9934 | -0.0476 | 0.9934 | 0.9967 |
| No log | 0.5770 | 266 | 0.9563 | 0.2414 | 0.9563 | 0.9779 |
| No log | 0.5813 | 268 | 0.9618 | -0.0645 | 0.9618 | 0.9807 |
| No log | 0.5857 | 270 | 0.9073 | -0.1085 | 0.9073 | 0.9525 |
| No log | 0.5900 | 272 | 1.0020 | -0.0476 | 1.0020 | 1.0010 |
| No log | 0.5944 | 274 | 1.2570 | -0.0097 | 1.2570 | 1.1212 |
| No log | 0.5987 | 276 | 1.2858 | 0.0884 | 1.2858 | 1.1339 |
| No log | 0.6030 | 278 | 1.5092 | -0.0097 | 1.5092 | 1.2285 |
| No log | 0.6074 | 280 | 1.5099 | 0.0928 | 1.5099 | 1.2288 |
| No log | 0.6117 | 282 | 1.3045 | 0.0884 | 1.3045 | 1.1422 |
| No log | 0.6161 | 284 | 1.4091 | -0.0097 | 1.4091 | 1.1871 |
| No log | 0.6204 | 286 | 1.3325 | -0.0097 | 1.3325 | 1.1543 |
| No log | 0.6247 | 288 | 1.2049 | 0.0928 | 1.2049 | 1.0977 |
| No log | 0.6291 | 290 | 1.0000 | -0.0645 | 1.0000 | 1.0000 |
| No log | 0.6334 | 292 | 1.0209 | -0.0645 | 1.0209 | 1.0104 |
| No log | 0.6377 | 294 | 1.1506 | 0.0884 | 1.1506 | 1.0727 |
| No log | 0.6421 | 296 | 1.2300 | 0.0884 | 1.2300 | 1.1091 |
| No log | 0.6464 | 298 | 1.5123 | 0.0 | 1.5123 | 1.2297 |
| No log | 0.6508 | 300 | 1.4709 | -0.0097 | 1.4709 | 1.2128 |
| No log | 0.6551 | 302 | 1.1026 | 0.0833 | 1.1026 | 1.0501 |
| No log | 0.6594 | 304 | 1.0543 | 0.0833 | 1.0543 | 1.0268 |
| No log | 0.6638 | 306 | 1.3746 | -0.0097 | 1.3746 | 1.1724 |
| No log | 0.6681 | 308 | 2.0208 | 0.0 | 2.0208 | 1.4216 |
| No log | 0.6725 | 310 | 1.8322 | 0.0 | 1.8322 | 1.3536 |
| No log | 0.6768 | 312 | 1.3155 | -0.0206 | 1.3155 | 1.1469 |
| No log | 0.6811 | 314 | 0.8459 | 0.0704 | 0.8459 | 0.9197 |
| No log | 0.6855 | 316 | 0.8941 | 0.0704 | 0.8941 | 0.9456 |
| No log | 0.6898 | 318 | 1.3748 | 0.0 | 1.3748 | 1.1725 |
| No log | 0.6941 | 320 | 1.4663 | 0.0 | 1.4663 | 1.2109 |
| No log | 0.6985 | 322 | 1.2884 | -0.0097 | 1.2884 | 1.1351 |
| No log | 0.7028 | 324 | 1.0192 | 0.0704 | 1.0192 | 1.0096 |
| No log | 0.7072 | 326 | 1.0281 | 0.0704 | 1.0281 | 1.0140 |
| No log | 0.7115 | 328 | 0.8707 | -0.1085 | 0.8707 | 0.9331 |
| No log | 0.7158 | 330 | 0.9045 | -0.1085 | 0.9045 | 0.9510 |
| No log | 0.7202 | 332 | 1.3441 | -0.0097 | 1.3441 | 1.1594 |
| No log | 0.7245 | 334 | 1.6187 | -0.0097 | 1.6187 | 1.2723 |
| No log | 0.7289 | 336 | 1.3882 | -0.0097 | 1.3882 | 1.1782 |
| No log | 0.7332 | 338 | 1.0096 | -0.1085 | 1.0096 | 1.0048 |
| No log | 0.7375 | 340 | 1.1220 | 0.0833 | 1.1220 | 1.0593 |
| No log | 0.7419 | 342 | 1.4189 | -0.0097 | 1.4189 | 1.1912 |
| No log | 0.7462 | 344 | 1.3234 | -0.0206 | 1.3234 | 1.1504 |
| No log | 0.7505 | 346 | 1.2649 | -0.0206 | 1.2649 | 1.1247 |
| No log | 0.7549 | 348 | 1.0719 | 0.0833 | 1.0719 | 1.0353 |
| No log | 0.7592 | 350 | 1.0653 | 0.0833 | 1.0653 | 1.0321 |
| No log | 0.7636 | 352 | 0.9979 | -0.0845 | 0.9979 | 0.9990 |
| No log | 0.7679 | 354 | 1.1827 | -0.0206 | 1.1827 | 1.0875 |
| No log | 0.7722 | 356 | 1.3361 | -0.0097 | 1.3361 | 1.1559 |
| No log | 0.7766 | 358 | 1.4385 | -0.0097 | 1.4385 | 1.1994 |
| No log | 0.7809 | 360 | 1.5072 | -0.0097 | 1.5072 | 1.2277 |
| No log | 0.7852 | 362 | 1.3644 | -0.0206 | 1.3644 | 1.1681 |
| No log | 0.7896 | 364 | 1.4307 | -0.0206 | 1.4307 | 1.1961 |
| No log | 0.7939 | 366 | 1.4629 | -0.0097 | 1.4629 | 1.2095 |
| No log | 0.7983 | 368 | 1.5303 | -0.0097 | 1.5303 | 1.2371 |
| No log | 0.8026 | 370 | 1.0586 | 0.0774 | 1.0586 | 1.0289 |
| No log | 0.8069 | 372 | 0.7804 | -0.0233 | 0.7804 | 0.8834 |
| No log | 0.8113 | 374 | 0.8080 | 0.0 | 0.8080 | 0.8989 |
| No log | 0.8156 | 376 | 1.1139 | 0.0774 | 1.1139 | 1.0554 |
| No log | 0.8200 | 378 | 1.7624 | -0.0097 | 1.7624 | 1.3275 |
| No log | 0.8243 | 380 | 1.7015 | -0.0097 | 1.7015 | 1.3044 |
| No log | 0.8286 | 382 | 1.1807 | -0.0206 | 1.1807 | 1.0866 |
| No log | 0.8330 | 384 | 0.8461 | 0.0 | 0.8461 | 0.9198 |
| No log | 0.8373 | 386 | 0.8355 | 0.0 | 0.8355 | 0.9141 |
| No log | 0.8416 | 388 | 0.9670 | -0.1085 | 0.9670 | 0.9834 |
| No log | 0.8460 | 390 | 1.0326 | -0.0645 | 1.0326 | 1.0162 |
| No log | 0.8503 | 392 | 1.1172 | -0.0206 | 1.1172 | 1.0570 |
| No log | 0.8547 | 394 | 1.0097 | -0.0645 | 1.0097 | 1.0048 |
| No log | 0.8590 | 396 | 0.9517 | -0.1085 | 0.9517 | 0.9756 |
| No log | 0.8633 | 398 | 0.9678 | -0.1085 | 0.9678 | 0.9838 |
| No log | 0.8677 | 400 | 1.1795 | -0.0206 | 1.1795 | 1.0861 |
| No log | 0.8720 | 402 | 1.2120 | -0.0206 | 1.2120 | 1.1009 |
| No log | 0.8764 | 404 | 1.2367 | -0.0206 | 1.2367 | 1.1121 |
| No log | 0.8807 | 406 | 0.9727 | -0.0645 | 0.9727 | 0.9863 |
| No log | 0.8850 | 408 | 0.9215 | -0.1085 | 0.9215 | 0.9599 |
| No log | 0.8894 | 410 | 1.0731 | 0.0833 | 1.0731 | 1.0359 |
| No log | 0.8937 | 412 | 1.2664 | -0.0097 | 1.2664 | 1.1253 |
| No log | 0.8980 | 414 | 1.2348 | -0.0206 | 1.2348 | 1.1112 |
| No log | 0.9024 | 416 | 1.2626 | -0.0206 | 1.2626 | 1.1237 |
| No log | 0.9067 | 418 | 1.0823 | 0.0833 | 1.0823 | 1.0403 |
| No log | 0.9111 | 420 | 1.1100 | 0.0833 | 1.1100 | 1.0536 |
| No log | 0.9154 | 422 | 1.0350 | 0.0833 | 1.0350 | 1.0174 |
| No log | 0.9197 | 424 | 1.0261 | -0.0645 | 1.0261 | 1.0130 |
| No log | 0.9241 | 426 | 0.9162 | -0.1085 | 0.9162 | 0.9572 |
| No log | 0.9284 | 428 | 0.7827 | 0.2667 | 0.7827 | 0.8847 |
| No log | 0.9328 | 430 | 0.8152 | 0.2667 | 0.8152 | 0.9029 |
| No log | 0.9371 | 432 | 0.8323 | 0.2414 | 0.8323 | 0.9123 |
| No log | 0.9414 | 434 | 0.9148 | -0.0845 | 0.9148 | 0.9564 |
| No log | 0.9458 | 436 | 0.8934 | 0.2414 | 0.8934 | 0.9452 |
| No log | 0.9501 | 438 | 0.8886 | 0.2414 | 0.8886 | 0.9427 |
| No log | 0.9544 | 440 | 0.8048 | 0.2414 | 0.8048 | 0.8971 |
| No log | 0.9588 | 442 | 0.7205 | 0.0 | 0.7205 | 0.8488 |
| No log | 0.9631 | 444 | 0.7498 | 0.0 | 0.7498 | 0.8659 |
| No log | 0.9675 | 446 | 0.9554 | -0.1085 | 0.9554 | 0.9775 |
| No log | 0.9718 | 448 | 1.4305 | -0.0097 | 1.4305 | 1.1960 |
| No log | 0.9761 | 450 | 1.7173 | 0.0 | 1.7173 | 1.3105 |
| No log | 0.9805 | 452 | 1.6161 | -0.0097 | 1.6161 | 1.2713 |
| No log | 0.9848 | 454 | 1.1899 | -0.1547 | 1.1899 | 1.0908 |
| No log | 0.9892 | 456 | 1.0239 | -0.1085 | 1.0239 | 1.0119 |
| No log | 0.9935 | 458 | 1.1341 | -0.1786 | 1.1341 | 1.0649 |
| No log | 0.9978 | 460 | 1.2042 | -0.1547 | 1.2042 | 1.0973 |
| No log | 1.0022 | 462 | 1.4485 | -0.0097 | 1.4485 | 1.2035 |
| No log | 1.0065 | 464 | 1.4071 | -0.0097 | 1.4071 | 1.1862 |
| No log | 1.0108 | 466 | 1.1786 | -0.1547 | 1.1786 | 1.0856 |
| No log | 1.0152 | 468 | 0.9571 | -0.1379 | 0.9571 | 0.9783 |
| No log | 1.0195 | 470 | 1.0033 | -0.1379 | 1.0033 | 1.0016 |
| No log | 1.0239 | 472 | 1.1087 | -0.0645 | 1.1087 | 1.0529 |
| No log | 1.0282 | 474 | 1.3032 | -0.0206 | 1.3032 | 1.1416 |
| No log | 1.0325 | 476 | 1.4221 | -0.0097 | 1.4221 | 1.1925 |
| No log | 1.0369 | 478 | 1.2143 | -0.0206 | 1.2143 | 1.1019 |
| No log | 1.0412 | 480 | 0.9098 | -0.3883 | 0.9098 | 0.9538 |
| No log | 1.0456 | 482 | 0.8054 | 0.0 | 0.8054 | 0.8975 |
| No log | 1.0499 | 484 | 0.8207 | 0.0222 | 0.8207 | 0.9059 |
| No log | 1.0542 | 486 | 0.9864 | -0.3276 | 0.9864 | 0.9932 |
| No log | 1.0586 | 488 | 1.3121 | -0.0206 | 1.3121 | 1.1455 |
| No log | 1.0629 | 490 | 1.3041 | -0.0206 | 1.3041 | 1.1420 |
| No log | 1.0672 | 492 | 1.0500 | -0.0845 | 1.0500 | 1.0247 |
| No log | 1.0716 | 494 | 0.7911 | 0.0222 | 0.7911 | 0.8894 |
| No log | 1.0759 | 496 | 0.7393 | 0.0 | 0.7393 | 0.8599 |
| No log | 1.0803 | 498 | 0.7435 | 0.0 | 0.7435 | 0.8622 |
| 0.3565 | 1.0846 | 500 | 0.8847 | 0.0388 | 0.8847 | 0.9406 |
| 0.3565 | 1.0889 | 502 | 0.9772 | -0.0845 | 0.9772 | 0.9886 |
| 0.3565 | 1.0933 | 504 | 0.9399 | -0.1085 | 0.9399 | 0.9695 |
| 0.3565 | 1.0976 | 506 | 0.8800 | -0.1085 | 0.8800 | 0.9381 |
| 0.3565 | 1.1020 | 508 | 0.9059 | -0.1085 | 0.9059 | 0.9518 |
| 0.3565 | 1.1063 | 510 | 0.9470 | 0.0704 | 0.9470 | 0.9731 |
| 0.3565 | 1.1106 | 512 | 0.8273 | -0.1379 | 0.8273 | 0.9095 |
| 0.3565 | 1.1150 | 514 | 0.8990 | -0.1085 | 0.8990 | 0.9481 |
| 0.3565 | 1.1193 | 516 | 1.2747 | -0.0206 | 1.2747 | 1.1290 |
| 0.3565 | 1.1236 | 518 | 1.4330 | -0.0097 | 1.4330 | 1.1971 |
| 0.3565 | 1.1280 | 520 | 1.5110 | -0.0097 | 1.5110 | 1.2292 |
| 0.3565 | 1.1323 | 522 | 1.2862 | -0.0097 | 1.2862 | 1.1341 |
| 0.3565 | 1.1367 | 524 | 0.9178 | -0.1379 | 0.9178 | 0.9580 |
| 0.3565 | 1.1410 | 526 | 0.8393 | -0.1379 | 0.8393 | 0.9161 |
| 0.3565 | 1.1453 | 528 | 0.9435 | -0.1085 | 0.9435 | 0.9713 |
| 0.3565 | 1.1497 | 530 | 1.2124 | -0.0097 | 1.2124 | 1.1011 |
| 0.3565 | 1.1540 | 532 | 1.2891 | -0.0097 | 1.2891 | 1.1354 |
| 0.3565 | 1.1584 | 534 | 1.1993 | -0.0097 | 1.1993 | 1.0951 |
| 0.3565 | 1.1627 | 536 | 1.1898 | -0.0206 | 1.1898 | 1.0908 |
| 0.3565 | 1.1670 | 538 | 1.0536 | -0.1085 | 1.0536 | 1.0264 |
| 0.3565 | 1.1714 | 540 | 1.0128 | -0.1085 | 1.0128 | 1.0064 |
| 0.3565 | 1.1757 | 542 | 0.9398 | -0.1379 | 0.9398 | 0.9695 |
| 0.3565 | 1.1800 | 544 | 1.1550 | 0.0774 | 1.1550 | 1.0747 |
| 0.3565 | 1.1844 | 546 | 1.1614 | 0.0833 | 1.1614 | 1.0777 |
| 0.3565 | 1.1887 | 548 | 1.0559 | -0.0845 | 1.0559 | 1.0276 |
| 0.3565 | 1.1931 | 550 | 1.0863 | -0.0845 | 1.0863 | 1.0422 |
| 0.3565 | 1.1974 | 552 | 1.1017 | -0.0645 | 1.1017 | 1.0496 |
| 0.3565 | 1.2017 | 554 | 1.2607 | -0.0097 | 1.2607 | 1.1228 |
| 0.3565 | 1.2061 | 556 | 1.2857 | -0.0097 | 1.2857 | 1.1339 |
| 0.3565 | 1.2104 | 558 | 1.1873 | -0.0476 | 1.1873 | 1.0896 |
| 0.3565 | 1.2148 | 560 | 1.1625 | -0.0645 | 1.1625 | 1.0782 |
| 0.3565 | 1.2191 | 562 | 1.2623 | -0.0645 | 1.2623 | 1.1235 |
| 0.3565 | 1.2234 | 564 | 1.4203 | -0.0097 | 1.4203 | 1.1918 |
| 0.3565 | 1.2278 | 566 | 1.4948 | -0.0097 | 1.4948 | 1.2226 |
| 0.3565 | 1.2321 | 568 | 1.3315 | -0.0097 | 1.3315 | 1.1539 |
| 0.3565 | 1.2364 | 570 | 1.1119 | -0.0645 | 1.1119 | 1.0545 |
| 0.3565 | 1.2408 | 572 | 1.0944 | -0.0845 | 1.0944 | 1.0461 |
| 0.3565 | 1.2451 | 574 | 1.1597 | -0.0476 | 1.1597 | 1.0769 |
| 0.3565 | 1.2495 | 576 | 1.3932 | -0.0097 | 1.3932 | 1.1804 |
| 0.3565 | 1.2538 | 578 | 1.3756 | -0.0097 | 1.3756 | 1.1728 |
| 0.3565 | 1.2581 | 580 | 1.2218 | -0.0097 | 1.2218 | 1.1054 |
| 0.3565 | 1.2625 | 582 | 1.1825 | -0.0331 | 1.1825 | 1.0874 |
| 0.3565 | 1.2668 | 584 | 0.9621 | -0.1085 | 0.9621 | 0.9808 |
| 0.3565 | 1.2711 | 586 | 0.8946 | -0.3276 | 0.8946 | 0.9458 |
| 0.3565 | 1.2755 | 588 | 1.0332 | -0.0845 | 1.0332 | 1.0165 |
| 0.3565 | 1.2798 | 590 | 1.2843 | -0.0097 | 1.2843 | 1.1333 |
| 0.3565 | 1.2842 | 592 | 1.2494 | -0.0097 | 1.2494 | 1.1178 |
| 0.3565 | 1.2885 | 594 | 0.9972 | -0.0845 | 0.9972 | 0.9986 |
| 0.3565 | 1.2928 | 596 | 0.9671 | -0.1085 | 0.9671 | 0.9834 |
| 0.3565 | 1.2972 | 598 | 1.1884 | -0.0097 | 1.1884 | 1.0901 |
| 0.3565 | 1.3015 | 600 | 1.4018 | 0.0 | 1.4018 | 1.1840 |
| 0.3565 | 1.3059 | 602 | 1.2831 | -0.0097 | 1.2831 | 1.1327 |
| 0.3565 | 1.3102 | 604 | 0.9161 | -0.1085 | 0.9161 | 0.9571 |
| 0.3565 | 1.3145 | 606 | 0.6359 | 0.0 | 0.6359 | 0.7974 |
| 0.3565 | 1.3189 | 608 | 0.6108 | 0.0 | 0.6108 | 0.7815 |
| 0.3565 | 1.3232 | 610 | 0.6507 | 0.0 | 0.6507 | 0.8066 |
| 0.3565 | 1.3275 | 612 | 0.8850 | -0.1085 | 0.8850 | 0.9408 |
| 0.3565 | 1.3319 | 614 | 1.1949 | -0.0097 | 1.1949 | 1.0931 |
| 0.3565 | 1.3362 | 616 | 1.2233 | -0.0097 | 1.2233 | 1.1060 |
| 0.3565 | 1.3406 | 618 | 0.9858 | -0.0845 | 0.9858 | 0.9929 |
| 0.3565 | 1.3449 | 620 | 0.7759 | 0.0 | 0.7759 | 0.8809 |
| 0.3565 | 1.3492 | 622 | 0.7823 | -0.0233 | 0.7823 | 0.8845 |
| 0.3565 | 1.3536 | 624 | 0.9610 | -0.0845 | 0.9610 | 0.9803 |
| 0.3565 | 1.3579 | 626 | 1.4623 | -0.0097 | 1.4623 | 1.2093 |
| 0.3565 | 1.3623 | 628 | 1.8708 | 0.0 | 1.8708 | 1.3678 |
| 0.3565 | 1.3666 | 630 | 1.8804 | 0.0 | 1.8804 | 1.3713 |
| 0.3565 | 1.3709 | 632 | 1.6890 | 0.0 | 1.6890 | 1.2996 |
| 0.3565 | 1.3753 | 634 | 1.3530 | -0.0097 | 1.3530 | 1.1632 |
| 0.3565 | 1.3796 | 636 | 1.0058 | -0.0845 | 1.0058 | 1.0029 |
| 0.3565 | 1.3839 | 638 | 0.8880 | -0.0233 | 0.8880 | 0.9423 |
| 0.3565 | 1.3883 | 640 | 0.8838 | -0.0233 | 0.8838 | 0.9401 |
| 0.3565 | 1.3926 | 642 | 0.9448 | -0.1379 | 0.9448 | 0.9720 |
| 0.3565 | 1.3970 | 644 | 1.1324 | -0.1786 | 1.1324 | 1.0641 |
| 0.3565 | 1.4013 | 646 | 1.4073 | -0.0097 | 1.4073 | 1.1863 |
| 0.3565 | 1.4056 | 648 | 1.4906 | -0.0097 | 1.4906 | 1.2209 |
| 0.3565 | 1.4100 | 650 | 1.3317 | -0.1340 | 1.3317 | 1.1540 |
| 0.3565 | 1.4143 | 652 | 1.0507 | -0.0845 | 1.0507 | 1.0250 |
| 0.3565 | 1.4187 | 654 | 0.9218 | -0.4667 | 0.9218 | 0.9601 |
| 0.3565 | 1.4230 | 656 | 0.8943 | -0.4667 | 0.8943 | 0.9457 |
| 0.3565 | 1.4273 | 658 | 0.9607 | -0.1085 | 0.9607 | 0.9801 |
| 0.3565 | 1.4317 | 660 | 1.1709 | -0.1786 | 1.1709 | 1.0821 |
| 0.3565 | 1.4360 | 662 | 1.2784 | -0.1786 | 1.2784 | 1.1307 |
| 0.3565 | 1.4403 | 664 | 1.1723 | -0.1786 | 1.1723 | 1.0827 |
| 0.3565 | 1.4447 | 666 | 0.9754 | -0.1085 | 0.9754 | 0.9876 |
| 0.3565 | 1.4490 | 668 | 0.8423 | 0.0 | 0.8423 | 0.9178 |
| 0.3565 | 1.4534 | 670 | 0.8268 | 0.0 | 0.8268 | 0.9093 |
| 0.3565 | 1.4577 | 672 | 0.8936 | -0.4667 | 0.8936 | 0.9453 |
| 0.3565 | 1.4620 | 674 | 1.0065 | -0.1085 | 1.0065 | 1.0032 |
| 0.3565 | 1.4664 | 676 | 1.0469 | -0.1085 | 1.0469 | 1.0232 |
| 0.3565 | 1.4707 | 678 | 0.9450 | -0.3276 | 0.9450 | 0.9721 |
| 0.3565 | 1.4751 | 680 | 0.8475 | 0.0 | 0.8475 | 0.9206 |
| 0.3565 | 1.4794 | 682 | 0.8920 | -0.4667 | 0.8920 | 0.9444 |
| 0.3565 | 1.4837 | 684 | 1.0105 | -0.1085 | 1.0105 | 1.0052 |
| 0.3565 | 1.4881 | 686 | 1.2240 | -0.1547 | 1.2240 | 1.1064 |
| 0.3565 | 1.4924 | 688 | 1.2480 | -0.1547 | 1.2480 | 1.1171 |
| 0.3565 | 1.4967 | 690 | 1.1877 | -0.1547 | 1.1877 | 1.0898 |
| 0.3565 | 1.5011 | 692 | 1.1072 | -0.0845 | 1.1072 | 1.0522 |
| 0.3565 | 1.5054 | 694 | 1.0791 | -0.0845 | 1.0791 | 1.0388 |
| 0.3565 | 1.5098 | 696 | 1.1233 | -0.0476 | 1.1233 | 1.0598 |
| 0.3565 | 1.5141 | 698 | 1.0412 | -0.0845 | 1.0412 | 1.0204 |
| 0.3565 | 1.5184 | 700 | 0.9640 | -0.3883 | 0.9640 | 0.9818 |
| 0.3565 | 1.5228 | 702 | 1.0194 | -0.0845 | 1.0194 | 1.0097 |
| 0.3565 | 1.5271 | 704 | 1.1205 | -0.0476 | 1.1205 | 1.0585 |
| 0.3565 | 1.5315 | 706 | 1.1237 | -0.0476 | 1.1237 | 1.0601 |
| 0.3565 | 1.5358 | 708 | 1.0189 | -0.0845 | 1.0189 | 1.0094 |
| 0.3565 | 1.5401 | 710 | 0.9081 | -0.3883 | 0.9081 | 0.9530 |
| 0.3565 | 1.5445 | 712 | 0.9146 | -0.3883 | 0.9146 | 0.9563 |
| 0.3565 | 1.5488 | 714 | 0.9978 | -0.1085 | 0.9978 | 0.9989 |
| 0.3565 | 1.5531 | 716 | 1.0584 | -0.0645 | 1.0584 | 1.0288 |
| 0.3565 | 1.5575 | 718 | 1.1822 | -0.1547 | 1.1822 | 1.0873 |
| 0.3565 | 1.5618 | 720 | 1.1425 | -0.0476 | 1.1425 | 1.0689 |
| 0.3565 | 1.5662 | 722 | 1.0202 | -0.1085 | 1.0202 | 1.0101 |
| 0.3565 | 1.5705 | 724 | 0.9700 | -0.1379 | 0.9700 | 0.9849 |
| 0.3565 | 1.5748 | 726 | 1.0739 | -0.0845 | 1.0739 | 1.0363 |
| 0.3565 | 1.5792 | 728 | 1.1155 | -0.0645 | 1.1155 | 1.0562 |
| 0.3565 | 1.5835 | 730 | 1.0329 | -0.0845 | 1.0329 | 1.0163 |
| 0.3565 | 1.5879 | 732 | 1.0676 | -0.0845 | 1.0676 | 1.0333 |
| 0.3565 | 1.5922 | 734 | 1.0434 | -0.0845 | 1.0434 | 1.0215 |
| 0.3565 | 1.5965 | 736 | 0.9042 | -0.3883 | 0.9042 | 0.9509 |
| 0.3565 | 1.6009 | 738 | 0.8371 | 0.0 | 0.8371 | 0.9149 |
| 0.3565 | 1.6052 | 740 | 0.8631 | 0.0222 | 0.8631 | 0.9291 |
| 0.3565 | 1.6095 | 742 | 1.0648 | -0.0845 | 1.0648 | 1.0319 |
| 0.3565 | 1.6139 | 744 | 1.3811 | -0.1547 | 1.3811 | 1.1752 |
| 0.3565 | 1.6182 | 746 | 1.3722 | -0.1547 | 1.3722 | 1.1714 |
| 0.3565 | 1.6226 | 748 | 1.0955 | -0.0645 | 1.0955 | 1.0467 |
| 0.3565 | 1.6269 | 750 | 0.8314 | -0.0233 | 0.8314 | 0.9118 |
| 0.3565 | 1.6312 | 752 | 0.8235 | -0.0233 | 0.8235 | 0.9075 |
| 0.3565 | 1.6356 | 754 | 0.8283 | -0.0233 | 0.8283 | 0.9101 |
| 0.3565 | 1.6399 | 756 | 0.8913 | -0.3883 | 0.8913 | 0.9441 |
| 0.3565 | 1.6443 | 758 | 1.0161 | -0.1085 | 1.0161 | 1.0080 |
| 0.3565 | 1.6486 | 760 | 0.9575 | -0.3276 | 0.9575 | 0.9785 |
| 0.3565 | 1.6529 | 762 | 0.8446 | 0.0 | 0.8446 | 0.9190 |
| 0.3565 | 1.6573 | 764 | 0.8126 | -0.0233 | 0.8126 | 0.9015 |
| 0.3565 | 1.6616 | 766 | 0.8146 | -0.0233 | 0.8146 | 0.9026 |
| 0.3565 | 1.6659 | 768 | 0.8276 | 0.0 | 0.8276 | 0.9097 |
| 0.3565 | 1.6703 | 770 | 0.8101 | 0.0 | 0.8101 | 0.9000 |
| 0.3565 | 1.6746 | 772 | 0.7891 | -0.0421 | 0.7891 | 0.8883 |
| 0.3565 | 1.6790 | 774 | 0.7625 | 0.0 | 0.7625 | 0.8732 |
| 0.3565 | 1.6833 | 776 | 0.7480 | -0.0233 | 0.7480 | 0.8649 |
| 0.3565 | 1.6876 | 778 | 0.7326 | 0.0 | 0.7326 | 0.8559 |
| 0.3565 | 1.6920 | 780 | 0.7839 | 0.0222 | 0.7839 | 0.8854 |
| 0.3565 | 1.6963 | 782 | 0.8544 | 0.0388 | 0.8544 | 0.9243 |
| 0.3565 | 1.7007 | 784 | 0.8095 | 0.0222 | 0.8095 | 0.8997 |
| 0.3565 | 1.7050 | 786 | 0.6807 | 0.0 | 0.6807 | 0.8250 |
| 0.3565 | 1.7093 | 788 | 0.6211 | 0.0 | 0.6211 | 0.7881 |
| 0.3565 | 1.7137 | 790 | 0.6141 | 0.0 | 0.6141 | 0.7837 |
| 0.3565 | 1.7180 | 792 | 0.6193 | 0.0 | 0.6193 | 0.7870 |
| 0.3565 | 1.7223 | 794 | 0.6695 | 0.0 | 0.6695 | 0.8183 |
| 0.3565 | 1.7267 | 796 | 0.7284 | 0.0 | 0.7284 | 0.8535 |
| 0.3565 | 1.7310 | 798 | 0.7241 | 0.0 | 0.7241 | 0.8509 |
| 0.3565 | 1.7354 | 800 | 0.7381 | 0.0 | 0.7381 | 0.8591 |
| 0.3565 | 1.7397 | 802 | 0.7140 | 0.0 | 0.7140 | 0.8450 |
| 0.3565 | 1.7440 | 804 | 0.7023 | 0.0 | 0.7023 | 0.8380 |
| 0.3565 | 1.7484 | 806 | 0.7513 | 0.0 | 0.7513 | 0.8668 |
| 0.3565 | 1.7527 | 808 | 0.8817 | -0.1379 | 0.8817 | 0.9390 |
| 0.3565 | 1.7570 | 810 | 1.0829 | -0.0476 | 1.0829 | 1.0406 |
| 0.3565 | 1.7614 | 812 | 1.0942 | -0.0476 | 1.0942 | 1.0461 |
| 0.3565 | 1.7657 | 814 | 0.9286 | -0.1379 | 0.9286 | 0.9636 |
| 0.3565 | 1.7701 | 816 | 0.8321 | 0.0 | 0.8321 | 0.9122 |
| 0.3565 | 1.7744 | 818 | 0.7601 | 0.0 | 0.7601 | 0.8719 |
| 0.3565 | 1.7787 | 820 | 0.7621 | -0.0233 | 0.7621 | 0.8730 |
| 0.3565 | 1.7831 | 822 | 0.8331 | 0.0 | 0.8331 | 0.9127 |
| 0.3565 | 1.7874 | 824 | 0.9920 | -0.1085 | 0.9920 | 0.9960 |
| 0.3565 | 1.7918 | 826 | 0.9913 | -0.1085 | 0.9913 | 0.9956 |
| 0.3565 | 1.7961 | 828 | 0.9742 | -0.1085 | 0.9742 | 0.9870 |
| 0.3565 | 1.8004 | 830 | 0.9623 | -0.1085 | 0.9623 | 0.9810 |
| 0.3565 | 1.8048 | 832 | 0.9026 | -0.1085 | 0.9026 | 0.9501 |
| 0.3565 | 1.8091 | 834 | 0.9173 | -0.1085 | 0.9173 | 0.9578 |
| 0.3565 | 1.8134 | 836 | 0.8297 | -0.1748 | 0.8297 | 0.9109 |
| 0.3565 | 1.8178 | 838 | 0.8509 | -0.1748 | 0.8509 | 0.9224 |
| 0.3565 | 1.8221 | 840 | 0.8056 | 0.2667 | 0.8056 | 0.8975 |
| 0.3565 | 1.8265 | 842 | 0.7643 | 0.2667 | 0.7643 | 0.8743 |
| 0.3565 | 1.8308 | 844 | 0.8550 | -0.1379 | 0.8550 | 0.9246 |
| 0.3565 | 1.8351 | 846 | 0.8426 | -0.1748 | 0.8426 | 0.9180 |
| 0.3565 | 1.8395 | 848 | 0.9608 | -0.0845 | 0.9608 | 0.9802 |
| 0.3565 | 1.8438 | 850 | 0.9801 | -0.0845 | 0.9801 | 0.9900 |
| 0.3565 | 1.8482 | 852 | 0.8307 | 0.2667 | 0.8307 | 0.9114 |
| 0.3565 | 1.8525 | 854 | 0.7664 | 0.2667 | 0.7664 | 0.8755 |
| 0.3565 | 1.8568 | 856 | 0.8855 | -0.1379 | 0.8855 | 0.9410 |
| 0.3565 | 1.8612 | 858 | 1.0829 | -0.0645 | 1.0829 | 1.0406 |
| 0.3565 | 1.8655 | 860 | 1.1194 | -0.1547 | 1.1194 | 1.0580 |
| 0.3565 | 1.8698 | 862 | 1.0990 | -0.1786 | 1.0990 | 1.0483 |
| 0.3565 | 1.8742 | 864 | 1.0013 | -0.0845 | 1.0013 | 1.0006 |
| 0.3565 | 1.8785 | 866 | 0.9126 | -0.1085 | 0.9126 | 0.9553 |
| 0.3565 | 1.8829 | 868 | 0.8240 | 0.2667 | 0.8240 | 0.9077 |
| 0.3565 | 1.8872 | 870 | 0.8947 | -0.1379 | 0.8947 | 0.9459 |
| 0.3565 | 1.8915 | 872 | 1.1059 | -0.0645 | 1.1059 | 1.0516 |
| 0.3565 | 1.8959 | 874 | 1.1009 | -0.0645 | 1.1009 | 1.0492 |
| 0.3565 | 1.9002 | 876 | 0.9601 | -0.1085 | 0.9601 | 0.9799 |
| 0.3565 | 1.9046 | 878 | 0.9454 | -0.1085 | 0.9454 | 0.9723 |
| 0.3565 | 1.9089 | 880 | 0.9077 | -0.1379 | 0.9077 | 0.9527 |
| 0.3565 | 1.9132 | 882 | 0.8609 | -0.1379 | 0.8609 | 0.9279 |
| 0.3565 | 1.9176 | 884 | 0.7762 | 0.2667 | 0.7762 | 0.8810 |
| 0.3565 | 1.9219 | 886 | 0.7504 | 0.2667 | 0.7504 | 0.8663 |
| 0.3565 | 1.9262 | 888 | 0.8559 | -0.1379 | 0.8559 | 0.9251 |
| 0.3565 | 1.9306 | 890 | 1.1076 | 0.0884 | 1.1076 | 1.0524 |
| 0.3565 | 1.9349 | 892 | 1.0588 | -0.0645 | 1.0588 | 1.0290 |
| 0.3565 | 1.9393 | 894 | 0.8856 | -0.1085 | 0.8856 | 0.9411 |
| 0.3565 | 1.9436 | 896 | 0.7334 | 0.2667 | 0.7334 | 0.8564 |
| 0.3565 | 1.9479 | 898 | 0.7163 | -0.0233 | 0.7163 | 0.8463 |
| 0.3565 | 1.9523 | 900 | 0.7181 | -0.0233 | 0.7181 | 0.8474 |
| 0.3565 | 1.9566 | 902 | 0.8519 | -0.1379 | 0.8519 | 0.9230 |
| 0.3565 | 1.9610 | 904 | 0.9699 | -0.1085 | 0.9699 | 0.9848 |
| 0.3565 | 1.9653 | 906 | 0.9044 | -0.1379 | 0.9044 | 0.9510 |
| 0.3565 | 1.9696 | 908 | 0.8281 | -0.1379 | 0.8281 | 0.9100 |
| 0.3565 | 1.9740 | 910 | 0.7533 | 0.2667 | 0.7533 | 0.8679 |
| 0.3565 | 1.9783 | 912 | 0.7093 | 0.0 | 0.7093 | 0.8422 |
| 0.3565 | 1.9826 | 914 | 0.7837 | 0.2667 | 0.7837 | 0.8853 |
| 0.3565 | 1.9870 | 916 | 0.9010 | -0.1085 | 0.9010 | 0.9492 |
| 0.3565 | 1.9913 | 918 | 1.1934 | -0.0331 | 1.1934 | 1.0924 |
| 0.3565 | 1.9957 | 920 | 1.1619 | -0.0645 | 1.1619 | 1.0779 |
| 0.3565 | 2.0 | 922 | 0.8957 | -0.1379 | 0.8957 | 0.9464 |
| 0.3565 | 2.0043 | 924 | 0.7776 | -0.0233 | 0.7776 | 0.8818 |
| 0.3565 | 2.0087 | 926 | 0.8034 | -0.0233 | 0.8034 | 0.8963 |
| 0.3565 | 2.0130 | 928 | 0.9800 | -0.1085 | 0.9800 | 0.9899 |
| 0.3565 | 2.0174 | 930 | 1.1847 | -0.0845 | 1.1847 | 1.0885 |
| 0.3565 | 2.0217 | 932 | 1.2347 | -0.1786 | 1.2347 | 1.1111 |
| 0.3565 | 2.0260 | 934 | 1.1708 | -0.2065 | 1.1708 | 1.0820 |
| 0.3565 | 2.0304 | 936 | 0.9788 | -0.0845 | 0.9788 | 0.9893 |
| 0.3565 | 2.0347 | 938 | 0.8995 | -0.1748 | 0.8995 | 0.9484 |
| 0.3565 | 2.0390 | 940 | 0.9337 | -0.1379 | 0.9337 | 0.9663 |
| 0.3565 | 2.0434 | 942 | 0.9820 | -0.0845 | 0.9820 | 0.9909 |
| 0.3565 | 2.0477 | 944 | 0.9380 | -0.1379 | 0.9380 | 0.9685 |
| 0.3565 | 2.0521 | 946 | 0.8469 | -0.4667 | 0.8469 | 0.9203 |
| 0.3565 | 2.0564 | 948 | 0.8451 | 0.0 | 0.8451 | 0.9193 |
| 0.3565 | 2.0607 | 950 | 0.8987 | -0.1748 | 0.8987 | 0.9480 |
| 0.3565 | 2.0651 | 952 | 0.9205 | -0.1379 | 0.9205 | 0.9594 |
| 0.3565 | 2.0694 | 954 | 0.8628 | -0.1748 | 0.8628 | 0.9289 |
| 0.3565 | 2.0738 | 956 | 0.8766 | -0.1748 | 0.8766 | 0.9363 |
| 0.3565 | 2.0781 | 958 | 0.9642 | -0.1379 | 0.9642 | 0.9819 |
| 0.3565 | 2.0824 | 960 | 0.8924 | -0.1748 | 0.8924 | 0.9447 |
| 0.3565 | 2.0868 | 962 | 0.8917 | -0.1748 | 0.8917 | 0.9443 |
| 0.3565 | 2.0911 | 964 | 0.9993 | -0.1085 | 0.9993 | 0.9997 |
| 0.3565 | 2.0954 | 966 | 1.0963 | -0.0845 | 1.0963 | 1.0471 |
| 0.3565 | 2.0998 | 968 | 1.0526 | -0.0845 | 1.0526 | 1.0260 |
| 0.3565 | 2.1041 | 970 | 0.9075 | -0.4667 | 0.9075 | 0.9526 |
| 0.3565 | 2.1085 | 972 | 0.8740 | 0.0 | 0.8740 | 0.9349 |
| 0.3565 | 2.1128 | 974 | 0.9042 | -0.4667 | 0.9042 | 0.9509 |
| 0.3565 | 2.1171 | 976 | 0.9488 | -0.1748 | 0.9488 | 0.9741 |
| 0.3565 | 2.1215 | 978 | 1.0742 | -0.0845 | 1.0742 | 1.0364 |
| 0.3565 | 2.1258 | 980 | 1.1035 | -0.0845 | 1.1035 | 1.0505 |
| 0.3565 | 2.1302 | 982 | 1.1369 | -0.0845 | 1.1369 | 1.0663 |
| 0.3565 | 2.1345 | 984 | 1.0774 | -0.0845 | 1.0774 | 1.0380 |
| 0.3565 | 2.1388 | 986 | 1.1307 | -0.0845 | 1.1307 | 1.0633 |
| 0.3565 | 2.1432 | 988 | 1.2363 | -0.1786 | 1.2363 | 1.1119 |
| 0.3565 | 2.1475 | 990 | 1.1652 | -0.2065 | 1.1652 | 1.0794 |
| 0.3565 | 2.1518 | 992 | 0.9717 | -0.1085 | 0.9717 | 0.9858 |
| 0.3565 | 2.1562 | 994 | 0.9052 | -0.4667 | 0.9052 | 0.9514 |
| 0.3565 | 2.1605 | 996 | 0.9267 | -0.1379 | 0.9267 | 0.9627 |
| 0.3565 | 2.1649 | 998 | 0.9712 | -0.1085 | 0.9712 | 0.9855 |
| 0.1294 | 2.1692 | 1000 | 0.9731 | -0.1085 | 0.9731 | 0.9865 |
| 0.1294 | 2.1735 | 1002 | 0.9668 | -0.1085 | 0.9668 | 0.9833 |
| 0.1294 | 2.1779 | 1004 | 0.9704 | -0.1085 | 0.9704 | 0.9851 |
| 0.1294 | 2.1822 | 1006 | 0.9216 | -0.1748 | 0.9216 | 0.9600 |
| 0.1294 | 2.1866 | 1008 | 0.8840 | 0.0 | 0.8840 | 0.9402 |
| 0.1294 | 2.1909 | 1010 | 0.8426 | 0.0 | 0.8426 | 0.9180 |
| 0.1294 | 2.1952 | 1012 | 0.9055 | -0.1748 | 0.9055 | 0.9516 |
| 0.1294 | 2.1996 | 1014 | 0.9871 | -0.1085 | 0.9871 | 0.9935 |
| 0.1294 | 2.2039 | 1016 | 1.0155 | -0.1085 | 1.0155 | 1.0077 |
| 0.1294 | 2.2082 | 1018 | 0.9137 | -0.1379 | 0.9137 | 0.9559 |
| 0.1294 | 2.2126 | 1020 | 0.8282 | 0.0 | 0.8282 | 0.9101 |
| 0.1294 | 2.2169 | 1022 | 0.8860 | -0.1379 | 0.8860 | 0.9413 |
| 0.1294 | 2.2213 | 1024 | 0.9730 | -0.1085 | 0.9730 | 0.9864 |
| 0.1294 | 2.2256 | 1026 | 1.0642 | -0.1085 | 1.0642 | 1.0316 |
| 0.1294 | 2.2299 | 1028 | 0.9709 | -0.1085 | 0.9709 | 0.9853 |
| 0.1294 | 2.2343 | 1030 | 0.8583 | 0.0222 | 0.8583 | 0.9264 |
| 0.1294 | 2.2386 | 1032 | 0.8330 | 0.0222 | 0.8330 | 0.9127 |
| 0.1294 | 2.2430 | 1034 | 0.8932 | -0.1379 | 0.8932 | 0.9451 |
| 0.1294 | 2.2473 | 1036 | 0.9335 | -0.1085 | 0.9335 | 0.9662 |
| 0.1294 | 2.2516 | 1038 | 0.8972 | -0.1379 | 0.8972 | 0.9472 |
| 0.1294 | 2.2560 | 1040 | 0.8722 | -0.1748 | 0.8722 | 0.9339 |
| 0.1294 | 2.2603 | 1042 | 0.9621 | -0.1085 | 0.9621 | 0.9809 |
| 0.1294 | 2.2646 | 1044 | 1.0792 | -0.1085 | 1.0792 | 1.0388 |
| 0.1294 | 2.2690 | 1046 | 1.0557 | -0.1085 | 1.0557 | 1.0275 |
| 0.1294 | 2.2733 | 1048 | 0.9694 | -0.1786 | 0.9694 | 0.9846 |
| 0.1294 | 2.2777 | 1050 | 0.9696 | -0.1786 | 0.9696 | 0.9847 |
| 0.1294 | 2.2820 | 1052 | 1.0594 | -0.1085 | 1.0594 | 1.0293 |
| 0.1294 | 2.2863 | 1054 | 1.1515 | -0.2394 | 1.1515 | 1.0731 |
| 0.1294 | 2.2907 | 1056 | 1.0261 | -0.1085 | 1.0261 | 1.0130 |
| 0.1294 | 2.2950 | 1058 | 0.8928 | -0.1379 | 0.8928 | 0.9449 |
| 0.1294 | 2.2993 | 1060 | 0.8653 | -0.1748 | 0.8653 | 0.9302 |
| 0.1294 | 2.3037 | 1062 | 0.8581 | -0.1748 | 0.8581 | 0.9264 |
| 0.1294 | 2.3080 | 1064 | 0.9250 | -0.1085 | 0.9250 | 0.9617 |
| 0.1294 | 2.3124 | 1066 | 0.9524 | -0.1085 | 0.9524 | 0.9759 |
| 0.1294 | 2.3167 | 1068 | 0.9141 | -0.1085 | 0.9141 | 0.9561 |
| 0.1294 | 2.3210 | 1070 | 0.8805 | -0.1748 | 0.8805 | 0.9384 |
| 0.1294 | 2.3254 | 1072 | 0.8769 | -0.1748 | 0.8769 | 0.9364 |
| 0.1294 | 2.3297 | 1074 | 0.8333 | 0.0 | 0.8333 | 0.9129 |
| 0.1294 | 2.3341 | 1076 | 0.8894 | -0.1748 | 0.8894 | 0.9431 |
| 0.1294 | 2.3384 | 1078 | 0.9085 | -0.1379 | 0.9085 | 0.9532 |
| 0.1294 | 2.3427 | 1080 | 0.9958 | -0.1085 | 0.9958 | 0.9979 |
| 0.1294 | 2.3471 | 1082 | 0.9917 | -0.1085 | 0.9917 | 0.9958 |
| 0.1294 | 2.3514 | 1084 | 0.9060 | -0.1440 | 0.9060 | 0.9518 |
| 0.1294 | 2.3557 | 1086 | 0.8915 | -0.1379 | 0.8915 | 0.9442 |
| 0.1294 | 2.3601 | 1088 | 0.9537 | -0.1085 | 0.9537 | 0.9766 |
| 0.1294 | 2.3644 | 1090 | 0.9069 | -0.1085 | 0.9069 | 0.9523 |
| 0.1294 | 2.3688 | 1092 | 0.8461 | 0.2667 | 0.8461 | 0.9198 |
| 0.1294 | 2.3731 | 1094 | 0.8795 | -0.1379 | 0.8795 | 0.9378 |
| 0.1294 | 2.3774 | 1096 | 0.9203 | -0.1085 | 0.9203 | 0.9593 |
| 0.1294 | 2.3818 | 1098 | 0.8677 | -0.1379 | 0.8677 | 0.9315 |
| 0.1294 | 2.3861 | 1100 | 0.9068 | -0.1379 | 0.9068 | 0.9523 |
| 0.1294 | 2.3905 | 1102 | 0.8125 | 0.2222 | 0.8125 | 0.9014 |
| 0.1294 | 2.3948 | 1104 | 0.7822 | -0.0233 | 0.7822 | 0.8844 |
| 0.1294 | 2.3991 | 1106 | 0.7775 | -0.0233 | 0.7775 | 0.8818 |
| 0.1294 | 2.4035 | 1108 | 0.8418 | 0.2524 | 0.8418 | 0.9175 |
| 0.1294 | 2.4078 | 1110 | 1.0500 | -0.1085 | 1.0500 | 1.0247 |
| 0.1294 | 2.4121 | 1112 | 1.1210 | -0.0645 | 1.1210 | 1.0588 |
| 0.1294 | 2.4165 | 1114 | 0.9788 | -0.1085 | 0.9788 | 0.9893 |
| 0.1294 | 2.4208 | 1116 | 0.8464 | 0.2524 | 0.8464 | 0.9200 |
| 0.1294 | 2.4252 | 1118 | 0.8620 | -0.1379 | 0.8620 | 0.9285 |
| 0.1294 | 2.4295 | 1120 | 0.8787 | -0.1379 | 0.8787 | 0.9374 |
| 0.1294 | 2.4338 | 1122 | 0.9961 | -0.1085 | 0.9961 | 0.9981 |
| 0.1294 | 2.4382 | 1124 | 1.1814 | -0.1547 | 1.1814 | 1.0869 |
| 0.1294 | 2.4425 | 1126 | 1.1605 | -0.1547 | 1.1605 | 1.0773 |
| 0.1294 | 2.4469 | 1128 | 0.9985 | -0.1085 | 0.9985 | 0.9993 |
| 0.1294 | 2.4512 | 1130 | 0.9388 | -0.1379 | 0.9388 | 0.9689 |
| 0.1294 | 2.4555 | 1132 | 1.0021 | -0.1085 | 1.0021 | 1.0010 |
| 0.1294 | 2.4599 | 1134 | 1.0069 | -0.1085 | 1.0069 | 1.0034 |
| 0.1294 | 2.4642 | 1136 | 0.9278 | -0.1379 | 0.9278 | 0.9632 |
| 0.1294 | 2.4685 | 1138 | 0.9098 | -0.1379 | 0.9098 | 0.9538 |
| 0.1294 | 2.4729 | 1140 | 0.8393 | 0.2524 | 0.8393 | 0.9161 |
| 0.1294 | 2.4772 | 1142 | 0.8939 | 0.2524 | 0.8939 | 0.9455 |
| 0.1294 | 2.4816 | 1144 | 1.0018 | -0.1379 | 1.0018 | 1.0009 |
| 0.1294 | 2.4859 | 1146 | 0.9573 | -0.1379 | 0.9573 | 0.9784 |
| 0.1294 | 2.4902 | 1148 | 0.8183 | 0.2524 | 0.8183 | 0.9046 |
| 0.1294 | 2.4946 | 1150 | 0.7542 | 0.2222 | 0.7542 | 0.8685 |
| 0.1294 | 2.4989 | 1152 | 0.7850 | 0.2524 | 0.7850 | 0.8860 |
| 0.1294 | 2.5033 | 1154 | 0.8740 | 0.2524 | 0.8740 | 0.9349 |
| 0.1294 | 2.5076 | 1156 | 0.8629 | 0.2524 | 0.8629 | 0.9289 |
| 0.1294 | 2.5119 | 1158 | 0.8164 | 0.2524 | 0.8164 | 0.9035 |
| 0.1294 | 2.5163 | 1160 | 0.7629 | 0.2667 | 0.7629 | 0.8734 |
| 0.1294 | 2.5206 | 1162 | 0.7815 | 0.2667 | 0.7815 | 0.8840 |
| 0.1294 | 2.5249 | 1164 | 0.7630 | 0.2667 | 0.7630 | 0.8735 |
| 0.1294 | 2.5293 | 1166 | 0.8132 | 0.2667 | 0.8132 | 0.9018 |
| 0.1294 | 2.5336 | 1168 | 0.8564 | 0.2524 | 0.8564 | 0.9254 |
| 0.1294 | 2.5380 | 1170 | 0.8388 | 0.2667 | 0.8388 | 0.9159 |
| 0.1294 | 2.5423 | 1172 | 0.7894 | 0.0 | 0.7894 | 0.8885 |
| 0.1294 | 2.5466 | 1174 | 0.7635 | 0.0 | 0.7635 | 0.8738 |
| 0.1294 | 2.5510 | 1176 | 0.7890 | 0.0 | 0.7890 | 0.8883 |
| 0.1294 | 2.5553 | 1178 | 0.8535 | 0.2667 | 0.8535 | 0.9238 |
| 0.1294 | 2.5597 | 1180 | 0.8728 | 0.2524 | 0.8728 | 0.9343 |
| 0.1294 | 2.5640 | 1182 | 0.7860 | 0.0 | 0.7860 | 0.8865 |
| 0.1294 | 2.5683 | 1184 | 0.7285 | -0.0233 | 0.7285 | 0.8535 |
| 0.1294 | 2.5727 | 1186 | 0.7642 | -0.2737 | 0.7642 | 0.8742 |
| 0.1294 | 2.5770 | 1188 | 0.7583 | -0.0233 | 0.7583 | 0.8708 |
| 0.1294 | 2.5813 | 1190 | 0.8256 | -0.0233 | 0.8256 | 0.9086 |
| 0.1294 | 2.5857 | 1192 | 1.0324 | -0.0645 | 1.0324 | 1.0161 |
| 0.1294 | 2.5900 | 1194 | 1.0743 | -0.0645 | 1.0743 | 1.0365 |
| 0.1294 | 2.5944 | 1196 | 0.9035 | 0.2414 | 0.9035 | 0.9505 |
| 0.1294 | 2.5987 | 1198 | 0.7630 | -0.0233 | 0.7630 | 0.8735 |
| 0.1294 | 2.6030 | 1200 | 0.7422 | -0.0233 | 0.7422 | 0.8615 |
| 0.1294 | 2.6074 | 1202 | 0.7600 | 0.0 | 0.7600 | 0.8718 |
| 0.1294 | 2.6117 | 1204 | 0.8121 | 0.0 | 0.8121 | 0.9011 |
| 0.1294 | 2.6161 | 1206 | 0.9161 | -0.1085 | 0.9161 | 0.9572 |
| 0.1294 | 2.6204 | 1208 | 0.8692 | 0.0 | 0.8692 | 0.9323 |
| 0.1294 | 2.6247 | 1210 | 0.8014 | -0.0233 | 0.8014 | 0.8952 |
| 0.1294 | 2.6291 | 1212 | 0.8233 | -0.0233 | 0.8233 | 0.9074 |
| 0.1294 | 2.6334 | 1214 | 0.8970 | -0.0233 | 0.8970 | 0.9471 |
| 0.1294 | 2.6377 | 1216 | 1.0236 | -0.1085 | 1.0236 | 1.0117 |
| 0.1294 | 2.6421 | 1218 | 1.1842 | -0.1786 | 1.1842 | 1.0882 |
| 0.1294 | 2.6464 | 1220 | 1.1733 | -0.1786 | 1.1733 | 1.0832 |
| 0.1294 | 2.6508 | 1222 | 1.0163 | -0.0845 | 1.0163 | 1.0081 |
| 0.1294 | 2.6551 | 1224 | 0.8901 | 0.0 | 0.8901 | 0.9434 |
| 0.1294 | 2.6594 | 1226 | 0.8056 | -0.0233 | 0.8056 | 0.8976 |
| 0.1294 | 2.6638 | 1228 | 0.8057 | -0.0233 | 0.8057 | 0.8976 |
| 0.1294 | 2.6681 | 1230 | 0.8420 | -0.0233 | 0.8420 | 0.9176 |
| 0.1294 | 2.6725 | 1232 | 0.9172 | 0.0 | 0.9172 | 0.9577 |
| 0.1294 | 2.6768 | 1234 | 0.8963 | 0.0 | 0.8963 | 0.9467 |
| 0.1294 | 2.6811 | 1236 | 0.8495 | 0.0 | 0.8495 | 0.9217 |
| 0.1294 | 2.6855 | 1238 | 0.7944 | -0.0233 | 0.7944 | 0.8913 |
| 0.1294 | 2.6898 | 1240 | 0.7696 | -0.0233 | 0.7696 | 0.8773 |
| 0.1294 | 2.6941 | 1242 | 0.7634 | -0.0233 | 0.7634 | 0.8737 |
| 0.1294 | 2.6985 | 1244 | 0.7855 | -0.0233 | 0.7855 | 0.8863 |
| 0.1294 | 2.7028 | 1246 | 0.8925 | 0.2667 | 0.8925 | 0.9447 |
| 0.1294 | 2.7072 | 1248 | 0.9571 | -0.1085 | 0.9571 | 0.9783 |
| 0.1294 | 2.7115 | 1250 | 0.9167 | -0.1748 | 0.9167 | 0.9574 |
| 0.1294 | 2.7158 | 1252 | 0.8539 | -0.0233 | 0.8539 | 0.9241 |
| 0.1294 | 2.7202 | 1254 | 0.8615 | 0.2222 | 0.8615 | 0.9282 |
| 0.1294 | 2.7245 | 1256 | 0.9181 | -0.1085 | 0.9181 | 0.9582 |
| 0.1294 | 2.7289 | 1258 | 0.9063 | 0.2667 | 0.9063 | 0.9520 |
| 0.1294 | 2.7332 | 1260 | 0.8423 | -0.0233 | 0.8423 | 0.9178 |
| 0.1294 | 2.7375 | 1262 | 0.8304 | -0.0233 | 0.8304 | 0.9112 |
| 0.1294 | 2.7419 | 1264 | 0.8324 | -0.0233 | 0.8324 | 0.9123 |
| 0.1294 | 2.7462 | 1266 | 0.8692 | 0.2222 | 0.8692 | 0.9323 |
| 0.1294 | 2.7505 | 1268 | 0.8434 | 0.2222 | 0.8434 | 0.9184 |
| 0.1294 | 2.7549 | 1270 | 0.7951 | -0.0233 | 0.7951 | 0.8917 |
| 0.1294 | 2.7592 | 1272 | 0.7989 | -0.0233 | 0.7989 | 0.8938 |
| 0.1294 | 2.7636 | 1274 | 0.8612 | 0.2222 | 0.8612 | 0.9280 |
| 0.1294 | 2.7679 | 1276 | 0.9839 | -0.1085 | 0.9839 | 0.9919 |
| 0.1294 | 2.7722 | 1278 | 1.0455 | -0.0845 | 1.0455 | 1.0225 |
| 0.1294 | 2.7766 | 1280 | 1.0945 | -0.0645 | 1.0945 | 1.0462 |
| 0.1294 | 2.7809 | 1282 | 0.9889 | -0.0845 | 0.9889 | 0.9944 |
| 0.1294 | 2.7852 | 1284 | 0.8930 | -0.1085 | 0.8930 | 0.9450 |
| 0.1294 | 2.7896 | 1286 | 0.7936 | 0.2667 | 0.7936 | 0.8908 |
| 0.1294 | 2.7939 | 1288 | 0.7141 | -0.0233 | 0.7141 | 0.8450 |
| 0.1294 | 2.7983 | 1290 | 0.6980 | -0.0233 | 0.6980 | 0.8355 |
| 0.1294 | 2.8026 | 1292 | 0.7178 | 0.0 | 0.7178 | 0.8472 |
| 0.1294 | 2.8069 | 1294 | 0.7619 | 0.2667 | 0.7619 | 0.8729 |
| 0.1294 | 2.8113 | 1296 | 0.8313 | 0.2414 | 0.8313 | 0.9117 |
| 0.1294 | 2.8156 | 1298 | 0.8830 | -0.1085 | 0.8830 | 0.9397 |
| 0.1294 | 2.8200 | 1300 | 0.8080 | 0.2524 | 0.8080 | 0.8989 |
| 0.1294 | 2.8243 | 1302 | 0.7278 | -0.0233 | 0.7278 | 0.8531 |
| 0.1294 | 2.8286 | 1304 | 0.7301 | 0.0 | 0.7301 | 0.8544 |
| 0.1294 | 2.8330 | 1306 | 0.7766 | 0.2667 | 0.7766 | 0.8813 |
| 0.1294 | 2.8373 | 1308 | 0.8437 | 0.2414 | 0.8437 | 0.9185 |
| 0.1294 | 2.8416 | 1310 | 0.7946 | 0.2524 | 0.7946 | 0.8914 |
| 0.1294 | 2.8460 | 1312 | 0.7508 | 0.0 | 0.7508 | 0.8665 |
| 0.1294 | 2.8503 | 1314 | 0.7526 | 0.0 | 0.7526 | 0.8675 |
| 0.1294 | 2.8547 | 1316 | 0.7815 | 0.0 | 0.7815 | 0.8840 |
| 0.1294 | 2.8590 | 1318 | 0.7916 | 0.0 | 0.7916 | 0.8897 |
| 0.1294 | 2.8633 | 1320 | 0.7576 | 0.0 | 0.7576 | 0.8704 |
| 0.1294 | 2.8677 | 1322 | 0.7118 | 0.0 | 0.7118 | 0.8437 |
| 0.1294 | 2.8720 | 1324 | 0.7038 | -0.0233 | 0.7038 | 0.8389 |
| 0.1294 | 2.8764 | 1326 | 0.7171 | -0.0233 | 0.7171 | 0.8468 |
| 0.1294 | 2.8807 | 1328 | 0.7476 | -0.0233 | 0.7476 | 0.8646 |
| 0.1294 | 2.8850 | 1330 | 0.8733 | -0.1085 | 0.8733 | 0.9345 |
| 0.1294 | 2.8894 | 1332 | 1.0505 | -0.1085 | 1.0505 | 1.0249 |
| 0.1294 | 2.8937 | 1334 | 1.0532 | -0.1085 | 1.0532 | 1.0263 |
| 0.1294 | 2.8980 | 1336 | 1.0173 | -0.1085 | 1.0173 | 1.0086 |
| 0.1294 | 2.9024 | 1338 | 0.8921 | -0.1085 | 0.8921 | 0.9445 |
| 0.1294 | 2.9067 | 1340 | 0.9185 | -0.1085 | 0.9185 | 0.9584 |
| 0.1294 | 2.9111 | 1342 | 0.8985 | -0.1085 | 0.8985 | 0.9479 |
| 0.1294 | 2.9154 | 1344 | 0.8939 | -0.1085 | 0.8939 | 0.9455 |
| 0.1294 | 2.9197 | 1346 | 0.8907 | -0.1085 | 0.8907 | 0.9438 |
| 0.1294 | 2.9241 | 1348 | 0.8292 | 0.2524 | 0.8292 | 0.9106 |
| 0.1294 | 2.9284 | 1350 | 0.8838 | -0.1085 | 0.8838 | 0.9401 |
| 0.1294 | 2.9328 | 1352 | 1.0314 | -0.1085 | 1.0314 | 1.0156 |
| 0.1294 | 2.9371 | 1354 | 1.0946 | -0.0645 | 1.0946 | 1.0462 |
| 0.1294 | 2.9414 | 1356 | 0.9684 | -0.1085 | 0.9684 | 0.9840 |
| 0.1294 | 2.9458 | 1358 | 0.9063 | -0.1085 | 0.9063 | 0.9520 |
| 0.1294 | 2.9501 | 1360 | 0.9447 | -0.1085 | 0.9447 | 0.9720 |
| 0.1294 | 2.9544 | 1362 | 1.0365 | -0.1085 | 1.0365 | 1.0181 |
| 0.1294 | 2.9588 | 1364 | 0.9885 | -0.1085 | 0.9885 | 0.9942 |
| 0.1294 | 2.9631 | 1366 | 0.8530 | 0.2524 | 0.8530 | 0.9236 |
| 0.1294 | 2.9675 | 1368 | 0.8279 | 0.0222 | 0.8279 | 0.9099 |
| 0.1294 | 2.9718 | 1370 | 0.9011 | -0.1085 | 0.9011 | 0.9492 |
| 0.1294 | 2.9761 | 1372 | 0.9321 | -0.1085 | 0.9321 | 0.9655 |
| 0.1294 | 2.9805 | 1374 | 1.0065 | -0.0845 | 1.0065 | 1.0033 |
| 0.1294 | 2.9848 | 1376 | 0.9843 | -0.1085 | 0.9843 | 0.9921 |
| 0.1294 | 2.9892 | 1378 | 0.8764 | -0.1379 | 0.8764 | 0.9362 |
| 0.1294 | 2.9935 | 1380 | 0.8363 | 0.0 | 0.8363 | 0.9145 |
| 0.1294 | 2.9978 | 1382 | 0.8188 | 0.0 | 0.8188 | 0.9049 |
| 0.1294 | 3.0022 | 1384 | 0.8116 | 0.2667 | 0.8116 | 0.9009 |
| 0.1294 | 3.0065 | 1386 | 0.9211 | -0.1085 | 0.9211 | 0.9598 |
| 0.1294 | 3.0108 | 1388 | 1.0282 | -0.0845 | 1.0282 | 1.0140 |
| 0.1294 | 3.0152 | 1390 | 1.0520 | -0.0845 | 1.0520 | 1.0257 |
| 0.1294 | 3.0195 | 1392 | 0.9526 | -0.1085 | 0.9526 | 0.9760 |
| 0.1294 | 3.0239 | 1394 | 0.8067 | -0.0233 | 0.8067 | 0.8982 |
| 0.1294 | 3.0282 | 1396 | 0.7626 | -0.0233 | 0.7626 | 0.8733 |
| 0.1294 | 3.0325 | 1398 | 0.7671 | -0.0233 | 0.7671 | 0.8758 |
| 0.1294 | 3.0369 | 1400 | 0.8200 | -0.0233 | 0.8200 | 0.9055 |
| 0.1294 | 3.0412 | 1402 | 0.8994 | -0.1085 | 0.8994 | 0.9484 |
| 0.1294 | 3.0456 | 1404 | 0.9130 | -0.1085 | 0.9130 | 0.9555 |
| 0.1294 | 3.0499 | 1406 | 0.8738 | -0.1379 | 0.8738 | 0.9348 |
| 0.1294 | 3.0542 | 1408 | 0.8191 | -0.0233 | 0.8191 | 0.9050 |
| 0.1294 | 3.0586 | 1410 | 0.8423 | 0.0 | 0.8423 | 0.9178 |
| 0.1294 | 3.0629 | 1412 | 0.9059 | -0.1085 | 0.9059 | 0.9518 |
| 0.1294 | 3.0672 | 1414 | 0.9830 | -0.1085 | 0.9830 | 0.9914 |
| 0.1294 | 3.0716 | 1416 | 1.0022 | -0.1085 | 1.0022 | 1.0011 |
| 0.1294 | 3.0759 | 1418 | 0.9371 | -0.1085 | 0.9371 | 0.9681 |
| 0.1294 | 3.0803 | 1420 | 0.8459 | -0.4667 | 0.8459 | 0.9197 |
| 0.1294 | 3.0846 | 1422 | 0.8296 | 0.0 | 0.8296 | 0.9108 |
| 0.1294 | 3.0889 | 1424 | 0.8847 | -0.1379 | 0.8847 | 0.9406 |
| 0.1294 | 3.0933 | 1426 | 0.9566 | -0.1085 | 0.9566 | 0.9780 |
| 0.1294 | 3.0976 | 1428 | 0.9662 | -0.1085 | 0.9662 | 0.9830 |
| 0.1294 | 3.1020 | 1430 | 0.8925 | -0.3276 | 0.8925 | 0.9447 |
| 0.1294 | 3.1063 | 1432 | 0.8560 | -0.0233 | 0.8560 | 0.9252 |
| 0.1294 | 3.1106 | 1434 | 0.8762 | -0.0233 | 0.8762 | 0.9361 |
| 0.1294 | 3.1150 | 1436 | 0.9733 | -0.1085 | 0.9733 | 0.9866 |
| 0.1294 | 3.1193 | 1438 | 1.0849 | -0.0845 | 1.0849 | 1.0416 |
| 0.1294 | 3.1236 | 1440 | 1.1162 | -0.1547 | 1.1162 | 1.0565 |
| 0.1294 | 3.1280 | 1442 | 1.0359 | -0.0845 | 1.0359 | 1.0178 |
| 0.1294 | 3.1323 | 1444 | 0.9485 | -0.1085 | 0.9485 | 0.9739 |
| 0.1294 | 3.1367 | 1446 | 0.8727 | 0.2524 | 0.8727 | 0.9342 |
| 0.1294 | 3.1410 | 1448 | 0.8819 | -0.1085 | 0.8819 | 0.9391 |
| 0.1294 | 3.1453 | 1450 | 0.9472 | -0.1085 | 0.9472 | 0.9733 |
| 0.1294 | 3.1497 | 1452 | 1.0992 | -0.0476 | 1.0992 | 1.0484 |
| 0.1294 | 3.1540 | 1454 | 1.1516 | -0.0097 | 1.1516 | 1.0731 |
| 0.1294 | 3.1584 | 1456 | 1.0585 | -0.0476 | 1.0585 | 1.0289 |
| 0.1294 | 3.1627 | 1458 | 0.9158 | -0.1085 | 0.9158 | 0.9570 |
| 0.1294 | 3.1670 | 1460 | 0.8246 | 0.0 | 0.8246 | 0.9081 |
| 0.1294 | 3.1714 | 1462 | 0.7891 | -0.0233 | 0.7891 | 0.8883 |
| 0.1294 | 3.1757 | 1464 | 0.7945 | -0.0233 | 0.7945 | 0.8914 |
| 0.1294 | 3.1800 | 1466 | 0.7965 | 0.0 | 0.7965 | 0.8924 |
| 0.1294 | 3.1844 | 1468 | 0.7861 | -0.0233 | 0.7861 | 0.8866 |
| 0.1294 | 3.1887 | 1470 | 0.8140 | 0.2667 | 0.8140 | 0.9022 |
| 0.1294 | 3.1931 | 1472 | 0.8810 | -0.1379 | 0.8810 | 0.9386 |
| 0.1294 | 3.1974 | 1474 | 0.9118 | -0.1085 | 0.9118 | 0.9549 |
| 0.1294 | 3.2017 | 1476 | 0.9122 | -0.1085 | 0.9122 | 0.9551 |
| 0.1294 | 3.2061 | 1478 | 0.8892 | -0.1786 | 0.8892 | 0.9430 |
| 0.1294 | 3.2104 | 1480 | 0.9002 | -0.1786 | 0.9002 | 0.9488 |
| 0.1294 | 3.2148 | 1482 | 0.9592 | -0.1085 | 0.9592 | 0.9794 |
| 0.1294 | 3.2191 | 1484 | 0.9535 | -0.1085 | 0.9535 | 0.9765 |
| 0.1294 | 3.2234 | 1486 | 0.8673 | 0.2222 | 0.8673 | 0.9313 |
| 0.1294 | 3.2278 | 1488 | 0.8285 | -0.0233 | 0.8285 | 0.9102 |
| 0.1294 | 3.2321 | 1490 | 0.8366 | -0.0233 | 0.8366 | 0.9147 |
| 0.1294 | 3.2364 | 1492 | 0.8884 | -0.1786 | 0.8884 | 0.9425 |
| 0.1294 | 3.2408 | 1494 | 1.0466 | -0.1085 | 1.0466 | 1.0230 |
| 0.1294 | 3.2451 | 1496 | 1.1195 | -0.2065 | 1.1195 | 1.0581 |
| 0.1294 | 3.2495 | 1498 | 1.0550 | -0.1085 | 1.0550 | 1.0271 |
| 0.0943 | 3.2538 | 1500 | 0.9237 | -0.1085 | 0.9237 | 0.9611 |
| 0.0943 | 3.2581 | 1502 | 0.8154 | 0.0 | 0.8154 | 0.9030 |
| 0.0943 | 3.2625 | 1504 | 0.7962 | -0.0233 | 0.7962 | 0.8923 |
| 0.0943 | 3.2668 | 1506 | 0.8210 | -0.0233 | 0.8210 | 0.9061 |
| 0.0943 | 3.2711 | 1508 | 0.9349 | -0.1085 | 0.9349 | 0.9669 |
| 0.0943 | 3.2755 | 1510 | 1.1090 | -0.2065 | 1.1090 | 1.0531 |
| 0.0943 | 3.2798 | 1512 | 1.2055 | -0.2065 | 1.2055 | 1.0979 |
| 0.0943 | 3.2842 | 1514 | 1.1489 | -0.2065 | 1.1489 | 1.0719 |
| 0.0943 | 3.2885 | 1516 | 1.0131 | -0.1159 | 1.0131 | 1.0065 |
| 0.0943 | 3.2928 | 1518 | 0.9838 | -0.1159 | 0.9838 | 0.9919 |
| 0.0943 | 3.2972 | 1520 | 1.0453 | -0.1159 | 1.0453 | 1.0224 |
| 0.0943 | 3.3015 | 1522 | 1.1701 | -0.2065 | 1.1701 | 1.0817 |
| 0.0943 | 3.3059 | 1524 | 1.1739 | -0.2065 | 1.1739 | 1.0835 |
| 0.0943 | 3.3102 | 1526 | 1.0692 | -0.1085 | 1.0692 | 1.0340 |
| 0.0943 | 3.3145 | 1528 | 0.9931 | -0.1085 | 0.9931 | 0.9966 |
| 0.0943 | 3.3189 | 1530 | 0.9714 | -0.1159 | 0.9714 | 0.9856 |
| 0.0943 | 3.3232 | 1532 | 0.9567 | -0.1786 | 0.9567 | 0.9781 |
| 0.0943 | 3.3275 | 1534 | 0.9948 | -0.1159 | 0.9948 | 0.9974 |
| 0.0943 | 3.3319 | 1536 | 0.9996 | -0.1159 | 0.9996 | 0.9998 |
| 0.0943 | 3.3362 | 1538 | 1.0050 | -0.1159 | 1.0050 | 1.0025 |
| 0.0943 | 3.3406 | 1540 | 0.9650 | -0.1159 | 0.9650 | 0.9824 |
| 0.0943 | 3.3449 | 1542 | 0.8997 | -0.4444 | 0.8997 | 0.9485 |
| 0.0943 | 3.3492 | 1544 | 0.8933 | -0.4444 | 0.8933 | 0.9452 |
| 0.0943 | 3.3536 | 1546 | 0.8893 | -0.4444 | 0.8893 | 0.9430 |
| 0.0943 | 3.3579 | 1548 | 0.9449 | -0.1085 | 0.9449 | 0.9720 |
| 0.0943 | 3.3623 | 1550 | 1.0436 | -0.1085 | 1.0436 | 1.0216 |
| 0.0943 | 3.3666 | 1552 | 1.0504 | -0.1085 | 1.0504 | 1.0249 |
| 0.0943 | 3.3709 | 1554 | 0.9763 | -0.1085 | 0.9763 | 0.9881 |
| 0.0943 | 3.3753 | 1556 | 0.8912 | -0.4667 | 0.8912 | 0.9440 |
| 0.0943 | 3.3796 | 1558 | 0.8509 | -0.0233 | 0.8509 | 0.9224 |
| 0.0943 | 3.3839 | 1560 | 0.8629 | -0.0233 | 0.8629 | 0.9289 |
| 0.0943 | 3.3883 | 1562 | 0.8806 | -0.0233 | 0.8806 | 0.9384 |
| 0.0943 | 3.3926 | 1564 | 0.9479 | -0.4444 | 0.9479 | 0.9736 |
| 0.0943 | 3.3970 | 1566 | 0.9888 | -0.1159 | 0.9888 | 0.9944 |
| 0.0943 | 3.4013 | 1568 | 0.9573 | -0.1786 | 0.9573 | 0.9784 |
| 0.0943 | 3.4056 | 1570 | 0.8953 | -0.0233 | 0.8953 | 0.9462 |
| 0.0943 | 3.4100 | 1572 | 0.8852 | -0.0233 | 0.8852 | 0.9409 |
| 0.0943 | 3.4143 | 1574 | 0.8781 | -0.0233 | 0.8781 | 0.9371 |
| 0.0943 | 3.4187 | 1576 | 0.9035 | -0.4444 | 0.9035 | 0.9505 |
| 0.0943 | 3.4230 | 1578 | 0.9973 | -0.1159 | 0.9973 | 0.9986 |
| 0.0943 | 3.4273 | 1580 | 1.1495 | -0.0845 | 1.1495 | 1.0721 |
| 0.0943 | 3.4317 | 1582 | 1.1578 | -0.0845 | 1.1578 | 1.0760 |
| 0.0943 | 3.4360 | 1584 | 1.0766 | -0.1085 | 1.0766 | 1.0376 |
| 0.0943 | 3.4403 | 1586 | 0.9602 | -0.4444 | 0.9602 | 0.9799 |
| 0.0943 | 3.4447 | 1588 | 0.9122 | -0.0233 | 0.9122 | 0.9551 |
| 0.0943 | 3.4490 | 1590 | 0.8975 | -0.0233 | 0.8975 | 0.9474 |
| 0.0943 | 3.4534 | 1592 | 0.8960 | -0.0233 | 0.8960 | 0.9466 |
| 0.0943 | 3.4577 | 1594 | 0.9424 | -0.4444 | 0.9424 | 0.9708 |
| 0.0943 | 3.4620 | 1596 | 1.0232 | -0.1085 | 1.0232 | 1.0115 |
| 0.0943 | 3.4664 | 1598 | 1.0998 | -0.0845 | 1.0998 | 1.0487 |
| 0.0943 | 3.4707 | 1600 | 1.0940 | -0.0845 | 1.0940 | 1.0459 |
| 0.0943 | 3.4751 | 1602 | 0.9860 | -0.1085 | 0.9860 | 0.9930 |
| 0.0943 | 3.4794 | 1604 | 0.8568 | -0.0233 | 0.8568 | 0.9256 |
| 0.0943 | 3.4837 | 1606 | 0.8303 | -0.0233 | 0.8303 | 0.9112 |
| 0.0943 | 3.4881 | 1608 | 0.8390 | -0.0233 | 0.8390 | 0.9160 |
| 0.0943 | 3.4924 | 1610 | 0.8657 | -0.0233 | 0.8657 | 0.9305 |
| 0.0943 | 3.4967 | 1612 | 0.8944 | -0.4444 | 0.8944 | 0.9457 |
| 0.0943 | 3.5011 | 1614 | 0.9187 | -0.4444 | 0.9187 | 0.9585 |
| 0.0943 | 3.5054 | 1616 | 0.9433 | -0.4444 | 0.9433 | 0.9712 |
| 0.0943 | 3.5098 | 1618 | 0.9703 | -0.4444 | 0.9703 | 0.9850 |
| 0.0943 | 3.5141 | 1620 | 0.9664 | -0.1440 | 0.9664 | 0.9831 |
| 0.0943 | 3.5184 | 1622 | 0.9212 | -0.4444 | 0.9212 | 0.9598 |
| 0.0943 | 3.5228 | 1624 | 0.8955 | -0.0233 | 0.8955 | 0.9463 |
| 0.0943 | 3.5271 | 1626 | 0.9080 | -0.0233 | 0.9080 | 0.9529 |
| 0.0943 | 3.5315 | 1628 | 0.9080 | -0.0233 | 0.9080 | 0.9529 |
| 0.0943 | 3.5358 | 1630 | 0.9265 | -0.1440 | 0.9265 | 0.9625 |
| 0.0943 | 3.5401 | 1632 | 0.9378 | -0.1440 | 0.9378 | 0.9684 |
| 0.0943 | 3.5445 | 1634 | 0.9230 | -0.1440 | 0.9230 | 0.9607 |
| 0.0943 | 3.5488 | 1636 | 0.8712 | -0.0233 | 0.8712 | 0.9334 |
| 0.0943 | 3.5531 | 1638 | 0.8488 | -0.0233 | 0.8488 | 0.9213 |
| 0.0943 | 3.5575 | 1640 | 0.8604 | -0.4444 | 0.8604 | 0.9276 |
| 0.0943 | 3.5618 | 1642 | 0.9232 | -0.1379 | 0.9232 | 0.9608 |
| 0.0943 | 3.5662 | 1644 | 0.9373 | -0.1085 | 0.9373 | 0.9682 |
| 0.0943 | 3.5705 | 1646 | 0.9000 | -0.1379 | 0.9000 | 0.9487 |
| 0.0943 | 3.5748 | 1648 | 0.8947 | -0.1379 | 0.8947 | 0.9459 |
| 0.0943 | 3.5792 | 1650 | 0.8399 | -0.4667 | 0.8399 | 0.9164 |
| 0.0943 | 3.5835 | 1652 | 0.8061 | -0.0233 | 0.8061 | 0.8978 |
| 0.0943 | 3.5879 | 1654 | 0.8103 | -0.0233 | 0.8103 | 0.9002 |
| 0.0943 | 3.5922 | 1656 | 0.7960 | -0.0233 | 0.7960 | 0.8922 |
| 0.0943 | 3.5965 | 1658 | 0.7849 | 0.0 | 0.7849 | 0.8859 |
| 0.0943 | 3.6009 | 1660 | 0.8118 | -0.1748 | 0.8118 | 0.9010 |
| 0.0943 | 3.6052 | 1662 | 0.8176 | -0.1748 | 0.8176 | 0.9042 |
| 0.0943 | 3.6095 | 1664 | 0.7686 | 0.0 | 0.7686 | 0.8767 |
| 0.0943 | 3.6139 | 1666 | 0.7195 | 0.0 | 0.7195 | 0.8482 |
| 0.0943 | 3.6182 | 1668 | 0.7154 | -0.0233 | 0.7154 | 0.8458 |
| 0.0943 | 3.6226 | 1670 | 0.7409 | 0.0 | 0.7409 | 0.8607 |
| 0.0943 | 3.6269 | 1672 | 0.8294 | 0.2667 | 0.8294 | 0.9107 |
| 0.0943 | 3.6312 | 1674 | 0.9022 | -0.1085 | 0.9022 | 0.9498 |
| 0.0943 | 3.6356 | 1676 | 0.8721 | -0.1748 | 0.8721 | 0.9339 |
| 0.0943 | 3.6399 | 1678 | 0.7751 | 0.0 | 0.7751 | 0.8804 |
| 0.0943 | 3.6443 | 1680 | 0.7276 | -0.0233 | 0.7276 | 0.8530 |
| 0.0943 | 3.6486 | 1682 | 0.7277 | -0.0233 | 0.7277 | 0.8530 |
| 0.0943 | 3.6529 | 1684 | 0.7478 | -0.0233 | 0.7478 | 0.8648 |
| 0.0943 | 3.6573 | 1686 | 0.8431 | 0.2667 | 0.8431 | 0.9182 |
| 0.0943 | 3.6616 | 1688 | 0.9896 | -0.1085 | 0.9896 | 0.9948 |
| 0.0943 | 3.6659 | 1690 | 1.0743 | -0.0645 | 1.0743 | 1.0365 |
| 0.0943 | 3.6703 | 1692 | 0.9992 | -0.1085 | 0.9992 | 0.9996 |
| 0.0943 | 3.6746 | 1694 | 0.8785 | -0.1379 | 0.8785 | 0.9373 |
| 0.0943 | 3.6790 | 1696 | 0.7708 | -0.0233 | 0.7708 | 0.8779 |
| 0.0943 | 3.6833 | 1698 | 0.7483 | -0.0233 | 0.7483 | 0.8651 |
| 0.0943 | 3.6876 | 1700 | 0.7656 | -0.0233 | 0.7656 | 0.8750 |
| 0.0943 | 3.6920 | 1702 | 0.8453 | -0.1748 | 0.8453 | 0.9194 |
| 0.0943 | 3.6963 | 1704 | 0.9123 | -0.1085 | 0.9123 | 0.9551 |
| 0.0943 | 3.7007 | 1706 | 1.0214 | -0.1085 | 1.0214 | 1.0106 |
| 0.0943 | 3.7050 | 1708 | 1.0174 | -0.1085 | 1.0174 | 1.0087 |
| 0.0943 | 3.7093 | 1710 | 0.9397 | -0.1085 | 0.9397 | 0.9694 |
| 0.0943 | 3.7137 | 1712 | 0.8186 | 0.2667 | 0.8186 | 0.9048 |
| 0.0943 | 3.7180 | 1714 | 0.7702 | 0.2667 | 0.7702 | 0.8776 |
| 0.0943 | 3.7223 | 1716 | 0.7415 | 0.0 | 0.7415 | 0.8611 |
| 0.0943 | 3.7267 | 1718 | 0.7228 | 0.0 | 0.7228 | 0.8502 |
| 0.0943 | 3.7310 | 1720 | 0.7319 | 0.0 | 0.7319 | 0.8555 |
| 0.0943 | 3.7354 | 1722 | 0.7720 | 0.0 | 0.7720 | 0.8786 |
| 0.0943 | 3.7397 | 1724 | 0.7968 | 0.0 | 0.7968 | 0.8927 |
| 0.0943 | 3.7440 | 1726 | 0.7831 | 0.0 | 0.7831 | 0.8849 |
| 0.0943 | 3.7484 | 1728 | 0.8023 | 0.0 | 0.8023 | 0.8957 |
| 0.0943 | 3.7527 | 1730 | 0.8075 | 0.0 | 0.8075 | 0.8986 |
| 0.0943 | 3.7570 | 1732 | 0.7933 | -0.0233 | 0.7933 | 0.8907 |
| 0.0943 | 3.7614 | 1734 | 0.7986 | 0.0 | 0.7986 | 0.8936 |
| 0.0943 | 3.7657 | 1736 | 0.8444 | -0.1748 | 0.8444 | 0.9189 |
| 0.0943 | 3.7701 | 1738 | 0.9492 | -0.1085 | 0.9492 | 0.9743 |
| 0.0943 | 3.7744 | 1740 | 0.9711 | -0.1085 | 0.9711 | 0.9854 |
| 0.0943 | 3.7787 | 1742 | 0.9569 | -0.1085 | 0.9569 | 0.9782 |
| 0.0943 | 3.7831 | 1744 | 0.8643 | -0.1748 | 0.8643 | 0.9297 |
| 0.0943 | 3.7874 | 1746 | 0.8130 | -0.0233 | 0.8130 | 0.9017 |
| 0.0943 | 3.7918 | 1748 | 0.8201 | -0.0233 | 0.8201 | 0.9056 |
| 0.0943 | 3.7961 | 1750 | 0.8415 | -0.0233 | 0.8415 | 0.9173 |
| 0.0943 | 3.8004 | 1752 | 0.8950 | -0.1748 | 0.8950 | 0.9461 |
| 0.0943 | 3.8048 | 1754 | 0.9175 | -0.1748 | 0.9175 | 0.9579 |
| 0.0943 | 3.8091 | 1756 | 0.8998 | -0.1748 | 0.8998 | 0.9486 |
| 0.0943 | 3.8134 | 1758 | 0.8403 | 0.2222 | 0.8403 | 0.9167 |
| 0.0943 | 3.8178 | 1760 | 0.8323 | -0.0233 | 0.8323 | 0.9123 |
| 0.0943 | 3.8221 | 1762 | 0.8547 | 0.2222 | 0.8547 | 0.9245 |
| 0.0943 | 3.8265 | 1764 | 0.9092 | -0.1748 | 0.9092 | 0.9535 |
| 0.0943 | 3.8308 | 1766 | 0.9451 | -0.1085 | 0.9451 | 0.9722 |
| 0.0943 | 3.8351 | 1768 | 1.0235 | -0.0845 | 1.0235 | 1.0117 |
| 0.0943 | 3.8395 | 1770 | 0.9939 | -0.1085 | 0.9939 | 0.9969 |
| 0.0943 | 3.8438 | 1772 | 0.8995 | -0.1379 | 0.8995 | 0.9484 |
| 0.0943 | 3.8482 | 1774 | 0.8846 | -0.1748 | 0.8846 | 0.9406 |
| 0.0943 | 3.8525 | 1776 | 0.8554 | -0.1748 | 0.8554 | 0.9249 |
| 0.0943 | 3.8568 | 1778 | 0.8430 | -0.1748 | 0.8430 | 0.9181 |
| 0.0943 | 3.8612 | 1780 | 0.8512 | -0.1748 | 0.8512 | 0.9226 |
| 0.0943 | 3.8655 | 1782 | 0.9131 | -0.1379 | 0.9131 | 0.9556 |
| 0.0943 | 3.8698 | 1784 | 0.9442 | -0.1379 | 0.9442 | 0.9717 |
| 0.0943 | 3.8742 | 1786 | 0.8799 | -0.1379 | 0.8799 | 0.9380 |
| 0.0943 | 3.8785 | 1788 | 0.8044 | 0.2667 | 0.8044 | 0.8969 |
| 0.0943 | 3.8829 | 1790 | 0.7650 | 0.0 | 0.7650 | 0.8746 |
| 0.0943 | 3.8872 | 1792 | 0.7799 | 0.0 | 0.7799 | 0.8831 |
| 0.0943 | 3.8915 | 1794 | 0.8428 | 0.2667 | 0.8428 | 0.9180 |
| 0.0943 | 3.8959 | 1796 | 0.9758 | -0.1085 | 0.9758 | 0.9879 |
| 0.0943 | 3.9002 | 1798 | 1.1190 | -0.2065 | 1.1190 | 1.0578 |
| 0.0943 | 3.9046 | 1800 | 1.1249 | -0.2065 | 1.1249 | 1.0606 |
| 0.0943 | 3.9089 | 1802 | 1.0262 | -0.0845 | 1.0262 | 1.0130 |
| 0.0943 | 3.9132 | 1804 | 0.8945 | -0.1748 | 0.8945 | 0.9458 |
| 0.0943 | 3.9176 | 1806 | 0.8692 | -0.1748 | 0.8692 | 0.9323 |
| 0.0943 | 3.9219 | 1808 | 0.9075 | -0.1748 | 0.9075 | 0.9527 |
| 0.0943 | 3.9262 | 1810 | 0.9620 | -0.1379 | 0.9620 | 0.9808 |
| 0.0943 | 3.9306 | 1812 | 0.9790 | -0.1379 | 0.9790 | 0.9895 |
| 0.0943 | 3.9349 | 1814 | 0.9431 | -0.1379 | 0.9431 | 0.9711 |
| 0.0943 | 3.9393 | 1816 | 0.9014 | -0.1748 | 0.9014 | 0.9494 |
| 0.0943 | 3.9436 | 1818 | 0.9118 | -0.1379 | 0.9118 | 0.9549 |
| 0.0943 | 3.9479 | 1820 | 1.0050 | -0.1379 | 1.0050 | 1.0025 |
| 0.0943 | 3.9523 | 1822 | 1.0507 | -0.0845 | 1.0507 | 1.0250 |
| 0.0943 | 3.9566 | 1824 | 0.9685 | -0.1379 | 0.9685 | 0.9841 |
| 0.0943 | 3.9610 | 1826 | 0.8397 | 0.0 | 0.8397 | 0.9163 |
| 0.0943 | 3.9653 | 1828 | 0.7907 | -0.0233 | 0.7907 | 0.8892 |
| 0.0943 | 3.9696 | 1830 | 0.7867 | -0.0233 | 0.7867 | 0.8870 |
| 0.0943 | 3.9740 | 1832 | 0.7668 | 0.0 | 0.7668 | 0.8756 |
| 0.0943 | 3.9783 | 1834 | 0.8042 | 0.0 | 0.8042 | 0.8967 |
| 0.0943 | 3.9826 | 1836 | 0.9515 | -0.1379 | 0.9515 | 0.9755 |
| 0.0943 | 3.9870 | 1838 | 1.1416 | -0.1547 | 1.1416 | 1.0685 |
| 0.0943 | 3.9913 | 1840 | 1.2486 | -0.1547 | 1.2486 | 1.1174 |
| 0.0943 | 3.9957 | 1842 | 1.1937 | -0.1786 | 1.1937 | 1.0926 |
| 0.0943 | 4.0000 | 1844 | 1.0371 | -0.0845 | 1.0371 | 1.0184 |
| 0.0943 | 4.0043 | 1846 | 0.8464 | -0.1748 | 0.8464 | 0.9200 |
| 0.0943 | 4.0087 | 1848 | 0.7712 | -0.0233 | 0.7712 | 0.8782 |
| 0.0943 | 4.0130 | 1850 | 0.7924 | -0.0421 | 0.7924 | 0.8902 |
| 0.0943 | 4.0174 | 1852 | 0.7887 | -0.0421 | 0.7887 | 0.8881 |
| 0.0943 | 4.0217 | 1854 | 0.7555 | -0.0233 | 0.7555 | 0.8692 |
| 0.0943 | 4.0260 | 1856 | 0.7583 | 0.0 | 0.7583 | 0.8708 |
| 0.0943 | 4.0304 | 1858 | 0.8603 | -0.1748 | 0.8603 | 0.9275 |
| 0.0943 | 4.0347 | 1860 | 0.9234 | -0.1379 | 0.9234 | 0.9609 |
| 0.0943 | 4.0390 | 1862 | 0.9184 | -0.1379 | 0.9184 | 0.9583 |
| 0.0943 | 4.0434 | 1864 | 0.8551 | -0.1748 | 0.8551 | 0.9247 |
| 0.0943 | 4.0477 | 1866 | 0.7881 | 0.0 | 0.7881 | 0.8877 |
| 0.0943 | 4.0521 | 1868 | 0.7770 | 0.0 | 0.7770 | 0.8815 |
| 0.0943 | 4.0564 | 1870 | 0.7921 | -0.0233 | 0.7921 | 0.8900 |
| 0.0943 | 4.0607 | 1872 | 0.8232 | 0.0 | 0.8232 | 0.9073 |
| 0.0943 | 4.0651 | 1874 | 0.9313 | -0.1379 | 0.9313 | 0.9650 |
| 0.0943 | 4.0694 | 1876 | 1.0042 | -0.1085 | 1.0042 | 1.0021 |
| 0.0943 | 4.0738 | 1878 | 0.9814 | -0.1085 | 0.9814 | 0.9907 |
| 0.0943 | 4.0781 | 1880 | 0.9009 | -0.1379 | 0.9009 | 0.9492 |
| 0.0943 | 4.0824 | 1882 | 0.8267 | 0.0 | 0.8267 | 0.9093 |
| 0.0943 | 4.0868 | 1884 | 0.7958 | -0.0233 | 0.7958 | 0.8921 |
| 0.0943 | 4.0911 | 1886 | 0.8159 | -0.0233 | 0.8159 | 0.9033 |
| 0.0943 | 4.0954 | 1888 | 0.8115 | -0.0233 | 0.8115 | 0.9009 |
| 0.0943 | 4.0998 | 1890 | 0.8038 | -0.0233 | 0.8038 | 0.8965 |
| 0.0943 | 4.1041 | 1892 | 0.8450 | 0.0 | 0.8450 | 0.9192 |
| 0.0943 | 4.1085 | 1894 | 0.9339 | -0.1379 | 0.9339 | 0.9664 |
| 0.0943 | 4.1128 | 1896 | 0.9615 | -0.1379 | 0.9615 | 0.9806 |
| 0.0943 | 4.1171 | 1898 | 0.9288 | -0.1379 | 0.9288 | 0.9637 |
| 0.0943 | 4.1215 | 1900 | 0.8498 | -0.4667 | 0.8498 | 0.9218 |
| 0.0943 | 4.1258 | 1902 | 0.7787 | 0.0 | 0.7787 | 0.8824 |
| 0.0943 | 4.1302 | 1904 | 0.7624 | 0.0 | 0.7624 | 0.8731 |
| 0.0943 | 4.1345 | 1906 | 0.7663 | 0.0 | 0.7663 | 0.8754 |
| 0.0943 | 4.1388 | 1908 | 0.7858 | 0.0 | 0.7858 | 0.8864 |
| 0.0943 | 4.1432 | 1910 | 0.8344 | 0.0 | 0.8344 | 0.9135 |
| 0.0943 | 4.1475 | 1912 | 0.9262 | -0.1379 | 0.9262 | 0.9624 |
| 0.0943 | 4.1518 | 1914 | 1.0095 | -0.1085 | 1.0095 | 1.0047 |
| 0.0943 | 4.1562 | 1916 | 0.9931 | -0.1085 | 0.9931 | 0.9965 |
| 0.0943 | 4.1605 | 1918 | 0.9081 | -0.1379 | 0.9081 | 0.9529 |
| 0.0943 | 4.1649 | 1920 | 0.8375 | 0.0 | 0.8375 | 0.9152 |
| 0.0943 | 4.1692 | 1922 | 0.8144 | -0.0233 | 0.8144 | 0.9025 |
| 0.0943 | 4.1735 | 1924 | 0.8221 | -0.0233 | 0.8221 | 0.9067 |
| 0.0943 | 4.1779 | 1926 | 0.8567 | 0.0 | 0.8567 | 0.9256 |
| 0.0943 | 4.1822 | 1928 | 0.9549 | -0.1379 | 0.9549 | 0.9772 |
| 0.0943 | 4.1866 | 1930 | 1.0508 | -0.1085 | 1.0508 | 1.0251 |
| 0.0943 | 4.1909 | 1932 | 1.0729 | -0.0845 | 1.0729 | 1.0358 |
| 0.0943 | 4.1952 | 1934 | 1.0392 | -0.1085 | 1.0392 | 1.0194 |
| 0.0943 | 4.1996 | 1936 | 0.9743 | -0.1085 | 0.9743 | 0.9871 |
| 0.0943 | 4.2039 | 1938 | 0.9051 | -0.1748 | 0.9051 | 0.9514 |
| 0.0943 | 4.2082 | 1940 | 0.8353 | -0.0233 | 0.8353 | 0.9140 |
| 0.0943 | 4.2126 | 1942 | 0.8195 | -0.0233 | 0.8195 | 0.9053 |
| 0.0943 | 4.2169 | 1944 | 0.8250 | -0.0233 | 0.8250 | 0.9083 |
| 0.0943 | 4.2213 | 1946 | 0.8236 | -0.0233 | 0.8236 | 0.9075 |
| 0.0943 | 4.2256 | 1948 | 0.8369 | 0.0 | 0.8369 | 0.9148 |
| 0.0943 | 4.2299 | 1950 | 0.8734 | -0.1748 | 0.8734 | 0.9346 |
| 0.0943 | 4.2343 | 1952 | 0.9168 | -0.1379 | 0.9168 | 0.9575 |
| 0.0943 | 4.2386 | 1954 | 0.9361 | -0.1379 | 0.9361 | 0.9675 |
| 0.0943 | 4.2430 | 1956 | 0.9195 | -0.1379 | 0.9195 | 0.9589 |
| 0.0943 | 4.2473 | 1958 | 0.8739 | -0.1748 | 0.8739 | 0.9349 |
| 0.0943 | 4.2516 | 1960 | 0.8520 | 0.2667 | 0.8520 | 0.9230 |
| 0.0943 | 4.2560 | 1962 | 0.8392 | 0.0 | 0.8392 | 0.9161 |
| 0.0943 | 4.2603 | 1964 | 0.8483 | 0.0 | 0.8483 | 0.9210 |
| 0.0943 | 4.2646 | 1966 | 0.8568 | 0.0 | 0.8568 | 0.9257 |
| 0.0943 | 4.2690 | 1968 | 0.8431 | 0.0 | 0.8431 | 0.9182 |
| 0.0943 | 4.2733 | 1970 | 0.8381 | -0.0233 | 0.8381 | 0.9155 |
| 0.0943 | 4.2777 | 1972 | 0.8597 | -0.0233 | 0.8597 | 0.9272 |
| 0.0943 | 4.2820 | 1974 | 0.8673 | 0.0 | 0.8673 | 0.9313 |
| 0.0943 | 4.2863 | 1976 | 0.8912 | 0.2667 | 0.8912 | 0.9441 |
| 0.0943 | 4.2907 | 1978 | 0.9231 | -0.1379 | 0.9231 | 0.9608 |
| 0.0943 | 4.2950 | 1980 | 0.8883 | 0.2667 | 0.8883 | 0.9425 |
| 0.0943 | 4.2993 | 1982 | 0.8556 | 0.0 | 0.8556 | 0.9250 |
| 0.0943 | 4.3037 | 1984 | 0.8648 | -0.0233 | 0.8648 | 0.9300 |
| 0.0943 | 4.3080 | 1986 | 0.9016 | -0.0233 | 0.9016 | 0.9495 |
| 0.0943 | 4.3124 | 1988 | 0.9481 | -0.1786 | 0.9481 | 0.9737 |
| 0.0943 | 4.3167 | 1990 | 1.0042 | -0.1379 | 1.0042 | 1.0021 |
| 0.0943 | 4.3210 | 1992 | 1.0443 | -0.0845 | 1.0443 | 1.0219 |
| 0.0943 | 4.3254 | 1994 | 0.9988 | -0.1379 | 0.9988 | 0.9994 |
| 0.0943 | 4.3297 | 1996 | 0.9688 | -0.1786 | 0.9688 | 0.9843 |
| 0.0943 | 4.3341 | 1998 | 0.9381 | -0.0233 | 0.9381 | 0.9686 |
| 0.0749 | 4.3384 | 2000 | 0.9461 | -0.0233 | 0.9461 | 0.9727 |
| 0.0749 | 4.3427 | 2002 | 0.9532 | -0.0233 | 0.9532 | 0.9763 |
| 0.0749 | 4.3471 | 2004 | 0.9692 | -0.1786 | 0.9692 | 0.9845 |
| 0.0749 | 4.3514 | 2006 | 1.0127 | -0.1379 | 1.0127 | 1.0063 |
| 0.0749 | 4.3557 | 2008 | 1.0092 | -0.1379 | 1.0092 | 1.0046 |
| 0.0749 | 4.3601 | 2010 | 0.9806 | -0.1379 | 0.9806 | 0.9903 |
| 0.0749 | 4.3644 | 2012 | 0.9501 | -0.1748 | 0.9501 | 0.9747 |
| 0.0749 | 4.3688 | 2014 | 0.9090 | 0.2667 | 0.9090 | 0.9534 |
| 0.0749 | 4.3731 | 2016 | 0.8904 | 0.2667 | 0.8904 | 0.9436 |
| 0.0749 | 4.3774 | 2018 | 0.8988 | 0.2667 | 0.8988 | 0.9480 |
| 0.0749 | 4.3818 | 2020 | 0.9305 | -0.1379 | 0.9305 | 0.9646 |
| 0.0749 | 4.3861 | 2022 | 0.9500 | -0.1379 | 0.9500 | 0.9747 |
| 0.0749 | 4.3905 | 2024 | 0.9701 | -0.1379 | 0.9701 | 0.9849 |
| 0.0749 | 4.3948 | 2026 | 0.9199 | -0.1379 | 0.9199 | 0.9591 |
| 0.0749 | 4.3991 | 2028 | 0.8606 | 0.0 | 0.8606 | 0.9277 |
| 0.0749 | 4.4035 | 2030 | 0.8449 | 0.0 | 0.8449 | 0.9192 |
| 0.0749 | 4.4078 | 2032 | 0.8665 | 0.2667 | 0.8665 | 0.9308 |
| 0.0749 | 4.4121 | 2034 | 0.9115 | -0.1379 | 0.9115 | 0.9547 |
| 0.0749 | 4.4165 | 2036 | 0.9083 | -0.1379 | 0.9083 | 0.9530 |
| 0.0749 | 4.4208 | 2038 | 0.9143 | -0.1379 | 0.9143 | 0.9562 |
| 0.0749 | 4.4252 | 2040 | 0.9340 | -0.1379 | 0.9340 | 0.9664 |
| 0.0749 | 4.4295 | 2042 | 0.8821 | -0.1379 | 0.8821 | 0.9392 |
| 0.0749 | 4.4338 | 2044 | 0.8511 | 0.0 | 0.8511 | 0.9225 |
| 0.0749 | 4.4382 | 2046 | 0.8385 | 0.0 | 0.8385 | 0.9157 |
| 0.0749 | 4.4425 | 2048 | 0.8597 | 0.0 | 0.8597 | 0.9272 |
| 0.0749 | 4.4469 | 2050 | 0.8862 | -0.1379 | 0.8862 | 0.9414 |
| 0.0749 | 4.4512 | 2052 | 0.8630 | 0.0 | 0.8630 | 0.9290 |
| 0.0749 | 4.4555 | 2054 | 0.8454 | 0.0 | 0.8454 | 0.9195 |
| 0.0749 | 4.4599 | 2056 | 0.8575 | 0.0 | 0.8575 | 0.9260 |
| 0.0749 | 4.4642 | 2058 | 0.8562 | 0.0 | 0.8562 | 0.9253 |
| 0.0749 | 4.4685 | 2060 | 0.8440 | 0.0 | 0.8440 | 0.9187 |
| 0.0749 | 4.4729 | 2062 | 0.8387 | 0.0 | 0.8387 | 0.9158 |
| 0.0749 | 4.4772 | 2064 | 0.8614 | -0.4667 | 0.8614 | 0.9281 |
| 0.0749 | 4.4816 | 2066 | 0.9377 | -0.1379 | 0.9377 | 0.9683 |
| 0.0749 | 4.4859 | 2068 | 1.0063 | -0.1085 | 1.0063 | 1.0032 |
| 0.0749 | 4.4902 | 2070 | 0.9952 | -0.1085 | 0.9952 | 0.9976 |
| 0.0749 | 4.4946 | 2072 | 0.9560 | -0.1379 | 0.9560 | 0.9778 |
| 0.0749 | 4.4989 | 2074 | 0.9396 | -0.1379 | 0.9396 | 0.9693 |
| 0.0749 | 4.5033 | 2076 | 0.9042 | -0.1748 | 0.9042 | 0.9509 |
| 0.0749 | 4.5076 | 2078 | 0.9010 | -0.1748 | 0.9010 | 0.9492 |
| 0.0749 | 4.5119 | 2080 | 0.9134 | -0.1379 | 0.9134 | 0.9557 |
| 0.0749 | 4.5163 | 2082 | 0.9204 | -0.1379 | 0.9204 | 0.9594 |
| 0.0749 | 4.5206 | 2084 | 0.9284 | -0.1379 | 0.9284 | 0.9635 |
| 0.0749 | 4.5249 | 2086 | 0.9265 | -0.1379 | 0.9265 | 0.9625 |
| 0.0749 | 4.5293 | 2088 | 0.9193 | -0.1379 | 0.9193 | 0.9588 |
| 0.0749 | 4.5336 | 2090 | 0.8666 | -0.4667 | 0.8666 | 0.9309 |
| 0.0749 | 4.5380 | 2092 | 0.8343 | -0.0233 | 0.8343 | 0.9134 |
| 0.0749 | 4.5423 | 2094 | 0.8352 | -0.0233 | 0.8352 | 0.9139 |
| 0.0749 | 4.5466 | 2096 | 0.8373 | -0.0233 | 0.8373 | 0.9151 |
| 0.0749 | 4.5510 | 2098 | 0.8650 | 0.0 | 0.8650 | 0.9300 |
| 0.0749 | 4.5553 | 2100 | 0.9246 | -0.1379 | 0.9246 | 0.9616 |
| 0.0749 | 4.5597 | 2102 | 0.9451 | -0.1379 | 0.9451 | 0.9722 |
| 0.0749 | 4.5640 | 2104 | 0.9359 | -0.1379 | 0.9359 | 0.9674 |
| 0.0749 | 4.5683 | 2106 | 0.8923 | -0.1379 | 0.8923 | 0.9446 |
| 0.0749 | 4.5727 | 2108 | 0.8765 | -0.4667 | 0.8765 | 0.9362 |
| 0.0749 | 4.5770 | 2110 | 0.8750 | 0.0 | 0.8750 | 0.9354 |
| 0.0749 | 4.5813 | 2112 | 0.8800 | 0.0 | 0.8800 | 0.9381 |
| 0.0749 | 4.5857 | 2114 | 0.8832 | -0.0233 | 0.8832 | 0.9398 |
| 0.0749 | 4.5900 | 2116 | 0.8870 | -0.0233 | 0.8870 | 0.9418 |
| 0.0749 | 4.5944 | 2118 | 0.8878 | -0.0233 | 0.8878 | 0.9422 |
| 0.0749 | 4.5987 | 2120 | 0.9051 | -0.3750 | 0.9051 | 0.9513 |
| 0.0749 | 4.6030 | 2122 | 0.9227 | -0.1379 | 0.9227 | 0.9606 |
| 0.0749 | 4.6074 | 2124 | 0.9045 | -0.3883 | 0.9045 | 0.9511 |
| 0.0749 | 4.6117 | 2126 | 0.8951 | -0.3883 | 0.8951 | 0.9461 |
| 0.0749 | 4.6161 | 2128 | 0.9075 | -0.1379 | 0.9075 | 0.9526 |
| 0.0749 | 4.6204 | 2130 | 0.8906 | -0.3883 | 0.8906 | 0.9437 |
| 0.0749 | 4.6247 | 2132 | 0.9136 | -0.1379 | 0.9136 | 0.9558 |
| 0.0749 | 4.6291 | 2134 | 0.8952 | -0.1379 | 0.8952 | 0.9462 |
| 0.0749 | 4.6334 | 2136 | 0.8933 | -0.1379 | 0.8933 | 0.9452 |
| 0.0749 | 4.6377 | 2138 | 0.9083 | -0.1379 | 0.9083 | 0.9530 |
| 0.0749 | 4.6421 | 2140 | 0.8662 | -0.3883 | 0.8662 | 0.9307 |
| 0.0749 | 4.6464 | 2142 | 0.8202 | 0.0 | 0.8202 | 0.9056 |
| 0.0749 | 4.6508 | 2144 | 0.7937 | 0.0 | 0.7937 | 0.8909 |
| 0.0749 | 4.6551 | 2146 | 0.8031 | 0.0 | 0.8031 | 0.8961 |
| 0.0749 | 4.6594 | 2148 | 0.8402 | 0.0 | 0.8402 | 0.9166 |
| 0.0749 | 4.6638 | 2150 | 0.8968 | -0.3883 | 0.8968 | 0.9470 |
| 0.0749 | 4.6681 | 2152 | 0.9555 | -0.1379 | 0.9555 | 0.9775 |
| 0.0749 | 4.6725 | 2154 | 0.9650 | -0.1379 | 0.9650 | 0.9823 |
| 0.0749 | 4.6768 | 2156 | 0.9151 | -0.3883 | 0.9151 | 0.9566 |
| 0.0749 | 4.6811 | 2158 | 0.8566 | -0.4667 | 0.8566 | 0.9255 |
| 0.0749 | 4.6855 | 2160 | 0.8421 | 0.0 | 0.8421 | 0.9177 |
| 0.0749 | 4.6898 | 2162 | 0.8640 | -0.4667 | 0.8640 | 0.9295 |
| 0.0749 | 4.6941 | 2164 | 0.9153 | -0.1379 | 0.9153 | 0.9567 |
| 0.0749 | 4.6985 | 2166 | 0.9134 | -0.1379 | 0.9134 | 0.9557 |
| 0.0749 | 4.7028 | 2168 | 0.8623 | -0.4667 | 0.8623 | 0.9286 |
| 0.0749 | 4.7072 | 2170 | 0.8322 | 0.0 | 0.8322 | 0.9123 |
| 0.0749 | 4.7115 | 2172 | 0.8539 | -0.4667 | 0.8539 | 0.9241 |
| 0.0749 | 4.7158 | 2174 | 0.8983 | -0.1379 | 0.8983 | 0.9478 |
| 0.0749 | 4.7202 | 2176 | 0.9016 | -0.1379 | 0.9016 | 0.9495 |
| 0.0749 | 4.7245 | 2178 | 0.9308 | -0.1379 | 0.9308 | 0.9648 |
| 0.0749 | 4.7289 | 2180 | 0.8912 | -0.1379 | 0.8912 | 0.9440 |
| 0.0749 | 4.7332 | 2182 | 0.8199 | 0.0 | 0.8199 | 0.9055 |
| 0.0749 | 4.7375 | 2184 | 0.8011 | 0.0 | 0.8011 | 0.8951 |
| 0.0749 | 4.7419 | 2186 | 0.8272 | -0.1379 | 0.8272 | 0.9095 |
| 0.0749 | 4.7462 | 2188 | 0.8308 | -0.1379 | 0.8308 | 0.9115 |
| 0.0749 | 4.7505 | 2190 | 0.8268 | -0.1379 | 0.8268 | 0.9093 |
| 0.0749 | 4.7549 | 2192 | 0.8551 | -0.1379 | 0.8551 | 0.9247 |
| 0.0749 | 4.7592 | 2194 | 0.8765 | -0.1379 | 0.8765 | 0.9362 |
| 0.0749 | 4.7636 | 2196 | 0.8778 | -0.1379 | 0.8778 | 0.9369 |
| 0.0749 | 4.7679 | 2198 | 0.8293 | -0.1379 | 0.8293 | 0.9107 |
| 0.0749 | 4.7722 | 2200 | 0.7634 | 0.0 | 0.7634 | 0.8737 |
| 0.0749 | 4.7766 | 2202 | 0.7595 | 0.0 | 0.7595 | 0.8715 |
| 0.0749 | 4.7809 | 2204 | 0.7848 | 0.0 | 0.7848 | 0.8859 |
| 0.0749 | 4.7852 | 2206 | 0.8576 | -0.1379 | 0.8576 | 0.9261 |
| 0.0749 | 4.7896 | 2208 | 0.9053 | -0.1379 | 0.9053 | 0.9515 |
| 0.0749 | 4.7939 | 2210 | 0.9536 | -0.1379 | 0.9536 | 0.9765 |
| 0.0749 | 4.7983 | 2212 | 0.9173 | -0.1379 | 0.9173 | 0.9578 |
| 0.0749 | 4.8026 | 2214 | 0.9220 | -0.1379 | 0.9220 | 0.9602 |
| 0.0749 | 4.8069 | 2216 | 0.9156 | -0.1379 | 0.9156 | 0.9569 |
| 0.0749 | 4.8113 | 2218 | 0.9176 | -0.1379 | 0.9176 | 0.9579 |
| 0.0749 | 4.8156 | 2220 | 0.8493 | -0.1379 | 0.8493 | 0.9216 |
| 0.0749 | 4.8200 | 2222 | 0.7936 | 0.0 | 0.7936 | 0.8908 |
| 0.0749 | 4.8243 | 2224 | 0.7835 | 0.0 | 0.7835 | 0.8851 |
| 0.0749 | 4.8286 | 2226 | 0.8121 | 0.2524 | 0.8121 | 0.9012 |
| 0.0749 | 4.8330 | 2228 | 0.8369 | -0.1379 | 0.8369 | 0.9148 |
| 0.0749 | 4.8373 | 2230 | 0.8660 | -0.1379 | 0.8660 | 0.9306 |
| 0.0749 | 4.8416 | 2232 | 0.9270 | -0.1085 | 0.9270 | 0.9628 |
| 0.0749 | 4.8460 | 2234 | 0.9109 | -0.1379 | 0.9109 | 0.9544 |
| 0.0749 | 4.8503 | 2236 | 0.8880 | -0.1379 | 0.8880 | 0.9423 |
| 0.0749 | 4.8547 | 2238 | 0.8122 | 0.2524 | 0.8122 | 0.9012 |
| 0.0749 | 4.8590 | 2240 | 0.7534 | 0.0 | 0.7534 | 0.8680 |
| 0.0749 | 4.8633 | 2242 | 0.7404 | 0.0 | 0.7404 | 0.8605 |
| 0.0749 | 4.8677 | 2244 | 0.7333 | 0.0 | 0.7333 | 0.8564 |
| 0.0749 | 4.8720 | 2246 | 0.7718 | 0.2667 | 0.7718 | 0.8785 |
| 0.0749 | 4.8764 | 2248 | 0.8739 | -0.1379 | 0.8739 | 0.9348 |
| 0.0749 | 4.8807 | 2250 | 0.9292 | -0.1085 | 0.9292 | 0.9640 |
| 0.0749 | 4.8850 | 2252 | 0.9005 | -0.1085 | 0.9005 | 0.9490 |
| 0.0749 | 4.8894 | 2254 | 0.8106 | 0.2524 | 0.8106 | 0.9003 |
| 0.0749 | 4.8937 | 2256 | 0.7218 | 0.0 | 0.7218 | 0.8496 |
| 0.0749 | 4.8980 | 2258 | 0.7031 | 0.0 | 0.7031 | 0.8385 |
| 0.0749 | 4.9024 | 2260 | 0.7134 | 0.0 | 0.7134 | 0.8446 |
| 0.0749 | 4.9067 | 2262 | 0.7523 | 0.0 | 0.7523 | 0.8673 |
| 0.0749 | 4.9111 | 2264 | 0.8425 | 0.2524 | 0.8425 | 0.9179 |
| 0.0749 | 4.9154 | 2266 | 0.9044 | -0.1085 | 0.9044 | 0.9510 |
| 0.0749 | 4.9197 | 2268 | 0.8834 | -0.1085 | 0.8834 | 0.9399 |
| 0.0749 | 4.9241 | 2270 | 0.8171 | 0.2524 | 0.8171 | 0.9039 |
| 0.0749 | 4.9284 | 2272 | 0.7627 | 0.2667 | 0.7627 | 0.8733 |
| 0.0749 | 4.9328 | 2274 | 0.7405 | 0.0 | 0.7405 | 0.8605 |
| 0.0749 | 4.9371 | 2276 | 0.7688 | 0.2667 | 0.7688 | 0.8768 |
| 0.0749 | 4.9414 | 2278 | 0.8221 | 0.2524 | 0.8221 | 0.9067 |
| 0.0749 | 4.9458 | 2280 | 0.8244 | 0.2524 | 0.8244 | 0.9080 |
| 0.0749 | 4.9501 | 2282 | 0.8273 | 0.2524 | 0.8273 | 0.9096 |
| 0.0749 | 4.9544 | 2284 | 0.7784 | 0.2667 | 0.7784 | 0.8823 |
| 0.0749 | 4.9588 | 2286 | 0.7708 | 0.0 | 0.7708 | 0.8779 |
| 0.0749 | 4.9631 | 2288 | 0.7650 | 0.0 | 0.7650 | 0.8747 |
| 0.0749 | 4.9675 | 2290 | 0.8055 | 0.2667 | 0.8055 | 0.8975 |
| 0.0749 | 4.9718 | 2292 | 0.8985 | -0.1379 | 0.8985 | 0.9479 |
| 0.0749 | 4.9761 | 2294 | 1.0486 | -0.1085 | 1.0486 | 1.0240 |
| 0.0749 | 4.9805 | 2296 | 1.0878 | -0.2394 | 1.0878 | 1.0430 |
| 0.0749 | 4.9848 | 2298 | 1.0169 | -0.1085 | 1.0169 | 1.0084 |
| 0.0749 | 4.9892 | 2300 | 0.9092 | -0.1379 | 0.9092 | 0.9535 |
| 0.0749 | 4.9935 | 2302 | 0.8250 | 0.0 | 0.8250 | 0.9083 |
| 0.0749 | 4.9978 | 2304 | 0.7943 | 0.0 | 0.7943 | 0.8912 |
| 0.0749 | 5.0022 | 2306 | 0.7901 | 0.0 | 0.7901 | 0.8889 |
| 0.0749 | 5.0065 | 2308 | 0.8078 | 0.0 | 0.8078 | 0.8988 |
| 0.0749 | 5.0108 | 2310 | 0.8465 | 0.0 | 0.8465 | 0.9200 |
| 0.0749 | 5.0152 | 2312 | 0.9124 | -0.1379 | 0.9124 | 0.9552 |
| 0.0749 | 5.0195 | 2314 | 0.9431 | -0.1085 | 0.9431 | 0.9711 |
| 0.0749 | 5.0239 | 2316 | 0.9250 | -0.1379 | 0.9250 | 0.9618 |
| 0.0749 | 5.0282 | 2318 | 0.8526 | 0.0 | 0.8526 | 0.9234 |
| 0.0749 | 5.0325 | 2320 | 0.8065 | 0.0 | 0.8065 | 0.8981 |
| 0.0749 | 5.0369 | 2322 | 0.8003 | 0.0 | 0.8003 | 0.8946 |
| 0.0749 | 5.0412 | 2324 | 0.8167 | 0.0 | 0.8167 | 0.9037 |
| 0.0749 | 5.0456 | 2326 | 0.8575 | -0.4667 | 0.8575 | 0.9260 |
| 0.0749 | 5.0499 | 2328 | 0.9105 | -0.1379 | 0.9105 | 0.9542 |
| 0.0749 | 5.0542 | 2330 | 0.9542 | -0.1085 | 0.9542 | 0.9768 |
| 0.0749 | 5.0586 | 2332 | 0.9581 | -0.1085 | 0.9581 | 0.9788 |
| 0.0749 | 5.0629 | 2334 | 0.9172 | -0.1379 | 0.9172 | 0.9577 |
| 0.0749 | 5.0672 | 2336 | 0.8471 | 0.0 | 0.8471 | 0.9204 |
| 0.0749 | 5.0716 | 2338 | 0.8036 | 0.0 | 0.8036 | 0.8964 |
| 0.0749 | 5.0759 | 2340 | 0.7954 | 0.0 | 0.7954 | 0.8918 |
| 0.0749 | 5.0803 | 2342 | 0.8032 | 0.0 | 0.8032 | 0.8962 |
| 0.0749 | 5.0846 | 2344 | 0.8203 | 0.0 | 0.8203 | 0.9057 |
| 0.0749 | 5.0889 | 2346 | 0.8682 | -0.4667 | 0.8682 | 0.9318 |
| 0.0749 | 5.0933 | 2348 | 0.9126 | -0.1379 | 0.9126 | 0.9553 |
| 0.0749 | 5.0976 | 2350 | 0.9256 | -0.1085 | 0.9256 | 0.9621 |
| 0.0749 | 5.1020 | 2352 | 0.9186 | -0.1085 | 0.9186 | 0.9584 |
| 0.0749 | 5.1063 | 2354 | 0.8728 | -0.3883 | 0.8728 | 0.9342 |
| 0.0749 | 5.1106 | 2356 | 0.8303 | 0.0 | 0.8303 | 0.9112 |
| 0.0749 | 5.1150 | 2358 | 0.8121 | 0.0 | 0.8121 | 0.9012 |
| 0.0749 | 5.1193 | 2360 | 0.7920 | 0.0 | 0.7920 | 0.8900 |
| 0.0749 | 5.1236 | 2362 | 0.7813 | 0.0 | 0.7813 | 0.8839 |
| 0.0749 | 5.1280 | 2364 | 0.7950 | 0.0 | 0.7950 | 0.8916 |
| 0.0749 | 5.1323 | 2366 | 0.8292 | 0.0 | 0.8292 | 0.9106 |
| 0.0749 | 5.1367 | 2368 | 0.8258 | 0.0 | 0.8258 | 0.9087 |
| 0.0749 | 5.1410 | 2370 | 0.8022 | 0.0 | 0.8022 | 0.8956 |
| 0.0749 | 5.1453 | 2372 | 0.8060 | 0.0 | 0.8060 | 0.8977 |
| 0.0749 | 5.1497 | 2374 | 0.8262 | 0.0 | 0.8262 | 0.9090 |
| 0.0749 | 5.1540 | 2376 | 0.8390 | 0.0 | 0.8390 | 0.9160 |
| 0.0749 | 5.1584 | 2378 | 0.8339 | 0.0 | 0.8339 | 0.9132 |
| 0.0749 | 5.1627 | 2380 | 0.8424 | 0.0 | 0.8424 | 0.9178 |
| 0.0749 | 5.1670 | 2382 | 0.8692 | -0.4667 | 0.8692 | 0.9323 |
| 0.0749 | 5.1714 | 2384 | 0.8807 | -0.4667 | 0.8807 | 0.9385 |
| 0.0749 | 5.1757 | 2386 | 0.8490 | 0.0 | 0.8490 | 0.9214 |
| 0.0749 | 5.1800 | 2388 | 0.8131 | 0.0 | 0.8131 | 0.9017 |
| 0.0749 | 5.1844 | 2390 | 0.7777 | 0.0 | 0.7777 | 0.8819 |
| 0.0749 | 5.1887 | 2392 | 0.7664 | 0.0 | 0.7664 | 0.8754 |
| 0.0749 | 5.1931 | 2394 | 0.7724 | 0.0 | 0.7724 | 0.8788 |
| 0.0749 | 5.1974 | 2396 | 0.8024 | 0.0 | 0.8024 | 0.8957 |
| 0.0749 | 5.2017 | 2398 | 0.8492 | -0.4667 | 0.8492 | 0.9215 |
| 0.0749 | 5.2061 | 2400 | 0.9115 | -0.1379 | 0.9115 | 0.9547 |
| 0.0749 | 5.2104 | 2402 | 0.9401 | -0.1085 | 0.9401 | 0.9696 |
| 0.0749 | 5.2148 | 2404 | 0.9074 | -0.1379 | 0.9074 | 0.9526 |
| 0.0749 | 5.2191 | 2406 | 0.8523 | 0.0 | 0.8523 | 0.9232 |
| 0.0749 | 5.2234 | 2408 | 0.8262 | 0.0 | 0.8262 | 0.9090 |
| 0.0749 | 5.2278 | 2410 | 0.8320 | -0.0233 | 0.8320 | 0.9121 |
| 0.0749 | 5.2321 | 2412 | 0.8386 | -0.0233 | 0.8386 | 0.9157 |
| 0.0749 | 5.2364 | 2414 | 0.8431 | 0.0 | 0.8431 | 0.9182 |
| 0.0749 | 5.2408 | 2416 | 0.8755 | -0.1748 | 0.8755 | 0.9357 |
| 0.0749 | 5.2451 | 2418 | 0.9175 | -0.1085 | 0.9175 | 0.9579 |
| 0.0749 | 5.2495 | 2420 | 0.9159 | -0.1085 | 0.9159 | 0.9570 |
| 0.0749 | 5.2538 | 2422 | 0.8861 | -0.1379 | 0.8861 | 0.9413 |
| 0.0749 | 5.2581 | 2424 | 0.8704 | -0.1379 | 0.8704 | 0.9329 |
| 0.0749 | 5.2625 | 2426 | 0.8527 | -0.4667 | 0.8527 | 0.9234 |
| 0.0749 | 5.2668 | 2428 | 0.8433 | 0.0 | 0.8433 | 0.9183 |
| 0.0749 | 5.2711 | 2430 | 0.8560 | -0.4667 | 0.8560 | 0.9252 |
| 0.0749 | 5.2755 | 2432 | 0.8542 | -0.4667 | 0.8542 | 0.9242 |
| 0.0749 | 5.2798 | 2434 | 0.8523 | 0.0 | 0.8523 | 0.9232 |
| 0.0749 | 5.2842 | 2436 | 0.8872 | -0.1379 | 0.8872 | 0.9419 |
| 0.0749 | 5.2885 | 2438 | 0.9346 | -0.1085 | 0.9346 | 0.9667 |
| 0.0749 | 5.2928 | 2440 | 0.9419 | -0.1085 | 0.9419 | 0.9705 |
| 0.0749 | 5.2972 | 2442 | 0.9381 | -0.1085 | 0.9381 | 0.9686 |
| 0.0749 | 5.3015 | 2444 | 0.9090 | -0.1379 | 0.9090 | 0.9534 |
| 0.0749 | 5.3059 | 2446 | 0.8478 | 0.0 | 0.8478 | 0.9208 |
| 0.0749 | 5.3102 | 2448 | 0.8273 | 0.0 | 0.8273 | 0.9096 |
| 0.0749 | 5.3145 | 2450 | 0.8254 | 0.0 | 0.8254 | 0.9085 |
| 0.0749 | 5.3189 | 2452 | 0.8299 | 0.0 | 0.8299 | 0.9110 |
| 0.0749 | 5.3232 | 2454 | 0.8416 | 0.0 | 0.8416 | 0.9174 |
| 0.0749 | 5.3275 | 2456 | 0.8750 | -0.1379 | 0.8750 | 0.9354 |
| 0.0749 | 5.3319 | 2458 | 0.9380 | -0.1085 | 0.9380 | 0.9685 |
| 0.0749 | 5.3362 | 2460 | 0.9491 | -0.1085 | 0.9491 | 0.9742 |
| 0.0749 | 5.3406 | 2462 | 0.9215 | -0.1085 | 0.9215 | 0.9600 |
| 0.0749 | 5.3449 | 2464 | 0.9176 | -0.1085 | 0.9176 | 0.9579 |
| 0.0749 | 5.3492 | 2466 | 0.9240 | -0.1085 | 0.9240 | 0.9613 |
| 0.0749 | 5.3536 | 2468 | 0.8953 | -0.1379 | 0.8953 | 0.9462 |
| 0.0749 | 5.3579 | 2470 | 0.8443 | 0.0 | 0.8443 | 0.9188 |
| 0.0749 | 5.3623 | 2472 | 0.8326 | -0.0233 | 0.8326 | 0.9125 |
| 0.0749 | 5.3666 | 2474 | 0.8310 | -0.0233 | 0.8310 | 0.9116 |
| 0.0749 | 5.3709 | 2476 | 0.8411 | 0.0 | 0.8411 | 0.9171 |
| 0.0749 | 5.3753 | 2478 | 0.8776 | -0.4667 | 0.8776 | 0.9368 |
| 0.0749 | 5.3796 | 2480 | 0.8980 | -0.1379 | 0.8980 | 0.9476 |
| 0.0749 | 5.3839 | 2482 | 0.8837 | -0.3883 | 0.8837 | 0.9401 |
| 0.0749 | 5.3883 | 2484 | 0.8535 | 0.0 | 0.8535 | 0.9239 |
| 0.0749 | 5.3926 | 2486 | 0.8349 | 0.0 | 0.8349 | 0.9137 |
| 0.0749 | 5.3970 | 2488 | 0.8126 | 0.0 | 0.8126 | 0.9014 |
| 0.0749 | 5.4013 | 2490 | 0.8181 | 0.0 | 0.8181 | 0.9045 |
| 0.0749 | 5.4056 | 2492 | 0.8478 | 0.0 | 0.8478 | 0.9207 |
| 0.0749 | 5.4100 | 2494 | 0.8897 | -0.4667 | 0.8897 | 0.9432 |
| 0.0749 | 5.4143 | 2496 | 0.9561 | -0.1085 | 0.9561 | 0.9778 |
| 0.0749 | 5.4187 | 2498 | 0.9837 | -0.1085 | 0.9837 | 0.9918 |
| 0.0629 | 5.4230 | 2500 | 0.9421 | -0.1085 | 0.9421 | 0.9706 |
| 0.0629 | 5.4273 | 2502 | 0.8618 | 0.0 | 0.8618 | 0.9283 |
| 0.0629 | 5.4317 | 2504 | 0.8278 | 0.0 | 0.8278 | 0.9098 |
| 0.0629 | 5.4360 | 2506 | 0.8192 | 0.0 | 0.8192 | 0.9051 |
| 0.0629 | 5.4403 | 2508 | 0.8240 | 0.0 | 0.8240 | 0.9077 |
| 0.0629 | 5.4447 | 2510 | 0.8460 | 0.0 | 0.8460 | 0.9198 |
| 0.0629 | 5.4490 | 2512 | 0.8909 | -0.4667 | 0.8909 | 0.9439 |
| 0.0629 | 5.4534 | 2514 | 0.9170 | -0.1085 | 0.9170 | 0.9576 |
| 0.0629 | 5.4577 | 2516 | 0.8912 | -0.4667 | 0.8912 | 0.9440 |
| 0.0629 | 5.4620 | 2518 | 0.8531 | 0.0 | 0.8531 | 0.9237 |
| 0.0629 | 5.4664 | 2520 | 0.8421 | 0.0 | 0.8421 | 0.9177 |
| 0.0629 | 5.4707 | 2522 | 0.8192 | 0.0 | 0.8192 | 0.9051 |
| 0.0629 | 5.4751 | 2524 | 0.8151 | -0.0233 | 0.8151 | 0.9028 |
| 0.0629 | 5.4794 | 2526 | 0.8195 | 0.0 | 0.8195 | 0.9052 |
| 0.0629 | 5.4837 | 2528 | 0.8485 | 0.0 | 0.8485 | 0.9211 |
| 0.0629 | 5.4881 | 2530 | 0.9112 | -0.1085 | 0.9112 | 0.9546 |
| 0.0629 | 5.4924 | 2532 | 0.9264 | -0.1085 | 0.9264 | 0.9625 |
| 0.0629 | 5.4967 | 2534 | 0.8884 | -0.3883 | 0.8884 | 0.9426 |
| 0.0629 | 5.5011 | 2536 | 0.8274 | 0.0 | 0.8274 | 0.9096 |
| 0.0629 | 5.5054 | 2538 | 0.7876 | 0.0 | 0.7876 | 0.8875 |
| 0.0629 | 5.5098 | 2540 | 0.7843 | 0.0 | 0.7843 | 0.8856 |
| 0.0629 | 5.5141 | 2542 | 0.7939 | 0.0 | 0.7939 | 0.8910 |
| 0.0629 | 5.5184 | 2544 | 0.8013 | 0.0 | 0.8013 | 0.8951 |
| 0.0629 | 5.5228 | 2546 | 0.8147 | 0.0 | 0.8147 | 0.9026 |
| 0.0629 | 5.5271 | 2548 | 0.8333 | 0.0 | 0.8333 | 0.9128 |
| 0.0629 | 5.5315 | 2550 | 0.8383 | 0.0 | 0.8383 | 0.9156 |
| 0.0629 | 5.5358 | 2552 | 0.8267 | 0.0 | 0.8267 | 0.9092 |
| 0.0629 | 5.5401 | 2554 | 0.8144 | -0.0233 | 0.8144 | 0.9025 |
| 0.0629 | 5.5445 | 2556 | 0.8139 | 0.0 | 0.8139 | 0.9022 |
| 0.0629 | 5.5488 | 2558 | 0.8180 | 0.0 | 0.8180 | 0.9044 |
| 0.0629 | 5.5531 | 2560 | 0.8258 | 0.0 | 0.8258 | 0.9087 |
| 0.0629 | 5.5575 | 2562 | 0.8019 | 0.0 | 0.8019 | 0.8955 |
| 0.0629 | 5.5618 | 2564 | 0.7814 | 0.0 | 0.7814 | 0.8840 |
| 0.0629 | 5.5662 | 2566 | 0.7578 | 0.0 | 0.7578 | 0.8705 |
| 0.0629 | 5.5705 | 2568 | 0.7489 | 0.0 | 0.7489 | 0.8654 |
| 0.0629 | 5.5748 | 2570 | 0.7374 | 0.0 | 0.7374 | 0.8587 |
| 0.0629 | 5.5792 | 2572 | 0.7373 | 0.0 | 0.7373 | 0.8586 |
| 0.0629 | 5.5835 | 2574 | 0.7491 | 0.0 | 0.7491 | 0.8655 |
| 0.0629 | 5.5879 | 2576 | 0.7702 | 0.0 | 0.7702 | 0.8776 |
| 0.0629 | 5.5922 | 2578 | 0.7961 | 0.0 | 0.7961 | 0.8922 |
| 0.0629 | 5.5965 | 2580 | 0.7802 | 0.0 | 0.7802 | 0.8833 |
| 0.0629 | 5.6009 | 2582 | 0.7691 | 0.0 | 0.7691 | 0.8770 |
| 0.0629 | 5.6052 | 2584 | 0.7518 | 0.0 | 0.7518 | 0.8671 |
| 0.0629 | 5.6095 | 2586 | 0.7238 | 0.0 | 0.7238 | 0.8508 |
| 0.0629 | 5.6139 | 2588 | 0.7189 | 0.0 | 0.7189 | 0.8479 |
| 0.0629 | 5.6182 | 2590 | 0.7239 | -0.0233 | 0.7239 | 0.8508 |
| 0.0629 | 5.6226 | 2592 | 0.7381 | 0.0 | 0.7381 | 0.8591 |
| 0.0629 | 5.6269 | 2594 | 0.7614 | 0.0 | 0.7614 | 0.8726 |
| 0.0629 | 5.6312 | 2596 | 0.7949 | 0.0222 | 0.7949 | 0.8916 |
| 0.0629 | 5.6356 | 2598 | 0.7997 | 0.0222 | 0.7997 | 0.8943 |
| 0.0629 | 5.6399 | 2600 | 0.8091 | 0.0222 | 0.8091 | 0.8995 |
| 0.0629 | 5.6443 | 2602 | 0.7974 | 0.0222 | 0.7974 | 0.8930 |
| 0.0629 | 5.6486 | 2604 | 0.7908 | 0.0222 | 0.7908 | 0.8893 |
| 0.0629 | 5.6529 | 2606 | 0.7625 | 0.0 | 0.7625 | 0.8732 |
| 0.0629 | 5.6573 | 2608 | 0.7481 | 0.0 | 0.7481 | 0.8649 |
| 0.0629 | 5.6616 | 2610 | 0.7586 | 0.0 | 0.7586 | 0.8710 |
| 0.0629 | 5.6659 | 2612 | 0.7984 | 0.0222 | 0.7984 | 0.8935 |
| 0.0629 | 5.6703 | 2614 | 0.8151 | 0.0222 | 0.8151 | 0.9028 |
| 0.0629 | 5.6746 | 2616 | 0.8224 | 0.0222 | 0.8224 | 0.9069 |
| 0.0629 | 5.6790 | 2618 | 0.8353 | 0.0222 | 0.8353 | 0.9139 |
| 0.0629 | 5.6833 | 2620 | 0.8763 | -0.1379 | 0.8763 | 0.9361 |
| 0.0629 | 5.6876 | 2622 | 0.9318 | -0.1379 | 0.9318 | 0.9653 |
| 0.0629 | 5.6920 | 2624 | 0.9401 | -0.1379 | 0.9401 | 0.9696 |
| 0.0629 | 5.6963 | 2626 | 0.8672 | -0.1379 | 0.8672 | 0.9312 |
| 0.0629 | 5.7007 | 2628 | 0.7759 | 0.0 | 0.7759 | 0.8809 |
| 0.0629 | 5.7050 | 2630 | 0.7331 | 0.0 | 0.7331 | 0.8562 |
| 0.0629 | 5.7093 | 2632 | 0.7297 | -0.0233 | 0.7297 | 0.8542 |
| 0.0629 | 5.7137 | 2634 | 0.7281 | -0.0233 | 0.7281 | 0.8533 |
| 0.0629 | 5.7180 | 2636 | 0.7270 | -0.0233 | 0.7270 | 0.8526 |
| 0.0629 | 5.7223 | 2638 | 0.7498 | 0.0 | 0.7498 | 0.8659 |
| 0.0629 | 5.7267 | 2640 | 0.7848 | 0.0222 | 0.7848 | 0.8859 |
| 0.0629 | 5.7310 | 2642 | 0.8355 | 0.2524 | 0.8355 | 0.9141 |
| 0.0629 | 5.7354 | 2644 | 0.8361 | 0.2524 | 0.8361 | 0.9144 |
| 0.0629 | 5.7397 | 2646 | 0.7922 | 0.0222 | 0.7922 | 0.8901 |
| 0.0629 | 5.7440 | 2648 | 0.7558 | 0.0 | 0.7558 | 0.8694 |
| 0.0629 | 5.7484 | 2650 | 0.7385 | 0.0 | 0.7385 | 0.8593 |
| 0.0629 | 5.7527 | 2652 | 0.7339 | 0.0 | 0.7339 | 0.8567 |
| 0.0629 | 5.7570 | 2654 | 0.7385 | 0.0 | 0.7385 | 0.8594 |
| 0.0629 | 5.7614 | 2656 | 0.7373 | 0.0 | 0.7373 | 0.8587 |
| 0.0629 | 5.7657 | 2658 | 0.7397 | 0.0 | 0.7397 | 0.8601 |
| 0.0629 | 5.7701 | 2660 | 0.7391 | -0.0233 | 0.7391 | 0.8597 |
| 0.0629 | 5.7744 | 2662 | 0.7391 | -0.0233 | 0.7391 | 0.8597 |
| 0.0629 | 5.7787 | 2664 | 0.7394 | 0.0 | 0.7394 | 0.8599 |
| 0.0629 | 5.7831 | 2666 | 0.7402 | 0.0 | 0.7402 | 0.8603 |
| 0.0629 | 5.7874 | 2668 | 0.7393 | 0.0 | 0.7393 | 0.8598 |
| 0.0629 | 5.7918 | 2670 | 0.7342 | -0.0233 | 0.7342 | 0.8569 |
| 0.0629 | 5.7961 | 2672 | 0.7354 | -0.0233 | 0.7354 | 0.8575 |
| 0.0629 | 5.8004 | 2674 | 0.7444 | -0.0421 | 0.7444 | 0.8628 |
| 0.0629 | 5.8048 | 2676 | 0.7306 | -0.0233 | 0.7306 | 0.8547 |
| 0.0629 | 5.8091 | 2678 | 0.7149 | 0.0 | 0.7149 | 0.8455 |
| 0.0629 | 5.8134 | 2680 | 0.7122 | 0.0 | 0.7122 | 0.8439 |
| 0.0629 | 5.8178 | 2682 | 0.7073 | 0.0 | 0.7073 | 0.8410 |
| 0.0629 | 5.8221 | 2684 | 0.7033 | 0.0 | 0.7033 | 0.8386 |
| 0.0629 | 5.8265 | 2686 | 0.6973 | 0.0 | 0.6973 | 0.8350 |
| 0.0629 | 5.8308 | 2688 | 0.6948 | 0.0 | 0.6948 | 0.8335 |
| 0.0629 | 5.8351 | 2690 | 0.6990 | 0.0 | 0.6990 | 0.8360 |
| 0.0629 | 5.8395 | 2692 | 0.7017 | 0.0 | 0.7017 | 0.8377 |
| 0.0629 | 5.8438 | 2694 | 0.7151 | 0.0 | 0.7151 | 0.8457 |
| 0.0629 | 5.8482 | 2696 | 0.7334 | 0.0 | 0.7334 | 0.8564 |
| 0.0629 | 5.8525 | 2698 | 0.7516 | 0.0222 | 0.7516 | 0.8669 |
| 0.0629 | 5.8568 | 2700 | 0.7713 | 0.2524 | 0.7713 | 0.8782 |
| 0.0629 | 5.8612 | 2702 | 0.7596 | 0.2524 | 0.7596 | 0.8715 |
| 0.0629 | 5.8655 | 2704 | 0.7293 | 0.0 | 0.7293 | 0.8540 |
| 0.0629 | 5.8698 | 2706 | 0.7182 | 0.0 | 0.7182 | 0.8475 |
| 0.0629 | 5.8742 | 2708 | 0.7209 | 0.0 | 0.7209 | 0.8490 |
| 0.0629 | 5.8785 | 2710 | 0.7319 | 0.0 | 0.7319 | 0.8555 |
| 0.0629 | 5.8829 | 2712 | 0.7663 | 0.2524 | 0.7663 | 0.8754 |
| 0.0629 | 5.8872 | 2714 | 0.8499 | -0.1379 | 0.8499 | 0.9219 |
| 0.0629 | 5.8915 | 2716 | 0.9109 | -0.1085 | 0.9109 | 0.9544 |
| 0.0629 | 5.8959 | 2718 | 0.9397 | -0.0845 | 0.9397 | 0.9694 |
| 0.0629 | 5.9002 | 2720 | 0.8874 | -0.1085 | 0.8874 | 0.9420 |
| 0.0629 | 5.9046 | 2722 | 0.8051 | 0.2524 | 0.8051 | 0.8973 |
| 0.0629 | 5.9089 | 2724 | 0.7606 | 0.0 | 0.7606 | 0.8721 |
| 0.0629 | 5.9132 | 2726 | 0.7363 | -0.0233 | 0.7363 | 0.8581 |
| 0.0629 | 5.9176 | 2728 | 0.7345 | 0.0 | 0.7345 | 0.8570 |
| 0.0629 | 5.9219 | 2730 | 0.7586 | 0.0 | 0.7586 | 0.8710 |
| 0.0629 | 5.9262 | 2732 | 0.8450 | -0.1379 | 0.8450 | 0.9193 |
| 0.0629 | 5.9306 | 2734 | 0.9369 | -0.0845 | 0.9369 | 0.9679 |
| 0.0629 | 5.9349 | 2736 | 0.9954 | -0.0845 | 0.9954 | 0.9977 |
| 0.0629 | 5.9393 | 2738 | 0.9932 | -0.0845 | 0.9932 | 0.9966 |
| 0.0629 | 5.9436 | 2740 | 0.9312 | -0.0845 | 0.9312 | 0.9650 |
| 0.0629 | 5.9479 | 2742 | 0.8416 | -0.1379 | 0.8416 | 0.9174 |
| 0.0629 | 5.9523 | 2744 | 0.7778 | 0.2524 | 0.7778 | 0.8819 |
| 0.0629 | 5.9566 | 2746 | 0.7591 | 0.0 | 0.7591 | 0.8713 |
| 0.0629 | 5.9610 | 2748 | 0.7761 | 0.0222 | 0.7761 | 0.8810 |
| 0.0629 | 5.9653 | 2750 | 0.8317 | -0.1379 | 0.8317 | 0.9120 |
| 0.0629 | 5.9696 | 2752 | 0.8522 | -0.1379 | 0.8522 | 0.9231 |
| 0.0629 | 5.9740 | 2754 | 0.8173 | -0.1379 | 0.8173 | 0.9040 |
| 0.0629 | 5.9783 | 2756 | 0.7749 | 0.0 | 0.7749 | 0.8803 |
| 0.0629 | 5.9826 | 2758 | 0.7574 | 0.0 | 0.7574 | 0.8703 |
| 0.0629 | 5.9870 | 2760 | 0.7685 | 0.0 | 0.7685 | 0.8767 |
| 0.0629 | 5.9913 | 2762 | 0.8131 | 0.2524 | 0.8131 | 0.9017 |
| 0.0629 | 5.9957 | 2764 | 0.8843 | -0.1379 | 0.8843 | 0.9404 |
| 0.0629 | 6.0000 | 2766 | 0.8921 | -0.1379 | 0.8921 | 0.9445 |
| 0.0629 | 6.0043 | 2768 | 0.8550 | -0.1379 | 0.8550 | 0.9246 |
| 0.0629 | 6.0087 | 2770 | 0.8265 | 0.2524 | 0.8265 | 0.9091 |
| 0.0629 | 6.0130 | 2772 | 0.7734 | 0.0 | 0.7734 | 0.8794 |
| 0.0629 | 6.0174 | 2774 | 0.7362 | 0.0 | 0.7362 | 0.8580 |
| 0.0629 | 6.0217 | 2776 | 0.7197 | 0.0 | 0.7197 | 0.8484 |
| 0.0629 | 6.0260 | 2778 | 0.7163 | 0.0 | 0.7163 | 0.8464 |
| 0.0629 | 6.0304 | 2780 | 0.7178 | 0.0 | 0.7178 | 0.8473 |
| 0.0629 | 6.0347 | 2782 | 0.7279 | 0.0 | 0.7279 | 0.8532 |
| 0.0629 | 6.0390 | 2784 | 0.7505 | 0.0 | 0.7505 | 0.8663 |
| 0.0629 | 6.0434 | 2786 | 0.7557 | 0.0 | 0.7557 | 0.8693 |
| 0.0629 | 6.0477 | 2788 | 0.7582 | 0.0 | 0.7582 | 0.8708 |
| 0.0629 | 6.0521 | 2790 | 0.7787 | 0.0 | 0.7787 | 0.8824 |
| 0.0629 | 6.0564 | 2792 | 0.7873 | 0.0 | 0.7873 | 0.8873 |
| 0.0629 | 6.0607 | 2794 | 0.7785 | 0.0 | 0.7785 | 0.8823 |
| 0.0629 | 6.0651 | 2796 | 0.7598 | 0.0 | 0.7598 | 0.8716 |
| 0.0629 | 6.0694 | 2798 | 0.7432 | -0.0233 | 0.7432 | 0.8621 |
| 0.0629 | 6.0738 | 2800 | 0.7320 | -0.0233 | 0.7320 | 0.8556 |
| 0.0629 | 6.0781 | 2802 | 0.7277 | 0.0 | 0.7277 | 0.8530 |
| 0.0629 | 6.0824 | 2804 | 0.7512 | 0.0 | 0.7512 | 0.8667 |
| 0.0629 | 6.0868 | 2806 | 0.7886 | 0.0 | 0.7886 | 0.8881 |
| 0.0629 | 6.0911 | 2808 | 0.8145 | 0.0 | 0.8145 | 0.9025 |
| 0.0629 | 6.0954 | 2810 | 0.8390 | 0.0222 | 0.8390 | 0.9160 |
| 0.0629 | 6.0998 | 2812 | 0.8136 | 0.0 | 0.8136 | 0.9020 |
| 0.0629 | 6.1041 | 2814 | 0.7602 | 0.0 | 0.7602 | 0.8719 |
| 0.0629 | 6.1085 | 2816 | 0.7283 | 0.0 | 0.7283 | 0.8534 |
| 0.0629 | 6.1128 | 2818 | 0.7168 | 0.0 | 0.7168 | 0.8467 |
| 0.0629 | 6.1171 | 2820 | 0.7228 | 0.0 | 0.7228 | 0.8502 |
| 0.0629 | 6.1215 | 2822 | 0.7461 | 0.0 | 0.7461 | 0.8638 |
| 0.0629 | 6.1258 | 2824 | 0.7801 | 0.0 | 0.7801 | 0.8832 |
| 0.0629 | 6.1302 | 2826 | 0.8081 | 0.0 | 0.8081 | 0.8989 |
| 0.0629 | 6.1345 | 2828 | 0.8467 | -0.3276 | 0.8467 | 0.9202 |
| 0.0629 | 6.1388 | 2830 | 0.8354 | -0.3883 | 0.8354 | 0.9140 |
| 0.0629 | 6.1432 | 2832 | 0.7892 | 0.0 | 0.7892 | 0.8884 |
| 0.0629 | 6.1475 | 2834 | 0.7589 | 0.0 | 0.7589 | 0.8712 |
| 0.0629 | 6.1518 | 2836 | 0.7518 | -0.0233 | 0.7518 | 0.8671 |
| 0.0629 | 6.1562 | 2838 | 0.7578 | 0.0 | 0.7578 | 0.8705 |
| 0.0629 | 6.1605 | 2840 | 0.7802 | 0.0 | 0.7802 | 0.8833 |
| 0.0629 | 6.1649 | 2842 | 0.8273 | 0.0 | 0.8273 | 0.9095 |
| 0.0629 | 6.1692 | 2844 | 0.8612 | -0.3276 | 0.8612 | 0.9280 |
| 0.0629 | 6.1735 | 2846 | 0.8462 | -0.3276 | 0.8462 | 0.9199 |
| 0.0629 | 6.1779 | 2848 | 0.8283 | -0.4667 | 0.8283 | 0.9101 |
| 0.0629 | 6.1822 | 2850 | 0.8014 | 0.0 | 0.8014 | 0.8952 |
| 0.0629 | 6.1866 | 2852 | 0.7900 | 0.0 | 0.7900 | 0.8888 |
| 0.0629 | 6.1909 | 2854 | 0.7828 | 0.0 | 0.7828 | 0.8847 |
| 0.0629 | 6.1952 | 2856 | 0.8010 | 0.0 | 0.8010 | 0.8950 |
| 0.0629 | 6.1996 | 2858 | 0.8285 | -0.4667 | 0.8285 | 0.9102 |
| 0.0629 | 6.2039 | 2860 | 0.8727 | -0.1085 | 0.8727 | 0.9342 |
| 0.0629 | 6.2082 | 2862 | 0.9241 | -0.1085 | 0.9241 | 0.9613 |
| 0.0629 | 6.2126 | 2864 | 0.9377 | -0.1085 | 0.9377 | 0.9683 |
| 0.0629 | 6.2169 | 2866 | 0.9107 | -0.1085 | 0.9107 | 0.9543 |
| 0.0629 | 6.2213 | 2868 | 0.8894 | -0.1085 | 0.8894 | 0.9431 |
| 0.0629 | 6.2256 | 2870 | 0.8468 | -0.4667 | 0.8468 | 0.9202 |
| 0.0629 | 6.2299 | 2872 | 0.8050 | 0.0 | 0.8050 | 0.8972 |
| 0.0629 | 6.2343 | 2874 | 0.8026 | 0.0 | 0.8026 | 0.8959 |
| 0.0629 | 6.2386 | 2876 | 0.8253 | 0.0 | 0.8253 | 0.9084 |
| 0.0629 | 6.2430 | 2878 | 0.8491 | 0.0 | 0.8491 | 0.9215 |
| 0.0629 | 6.2473 | 2880 | 0.8653 | -0.3883 | 0.8653 | 0.9302 |
| 0.0629 | 6.2516 | 2882 | 0.8911 | -0.3276 | 0.8911 | 0.9440 |
| 0.0629 | 6.2560 | 2884 | 0.9252 | -0.3276 | 0.9252 | 0.9619 |
| 0.0629 | 6.2603 | 2886 | 0.9252 | -0.3276 | 0.9252 | 0.9619 |
| 0.0629 | 6.2646 | 2888 | 0.9211 | -0.3276 | 0.9211 | 0.9597 |
| 0.0629 | 6.2690 | 2890 | 0.8809 | -0.3883 | 0.8809 | 0.9386 |
| 0.0629 | 6.2733 | 2892 | 0.8277 | 0.0 | 0.8277 | 0.9098 |
| 0.0629 | 6.2777 | 2894 | 0.7953 | -0.0233 | 0.7953 | 0.8918 |
| 0.0629 | 6.2820 | 2896 | 0.7914 | -0.0233 | 0.7914 | 0.8896 |
| 0.0629 | 6.2863 | 2898 | 0.8009 | -0.0233 | 0.8009 | 0.8949 |
| 0.0629 | 6.2907 | 2900 | 0.8318 | 0.0 | 0.8318 | 0.9120 |
| 0.0629 | 6.2950 | 2902 | 0.8702 | 0.0388 | 0.8702 | 0.9329 |
| 0.0629 | 6.2993 | 2904 | 0.8943 | 0.0517 | 0.8943 | 0.9457 |
| 0.0629 | 6.3037 | 2906 | 0.9061 | 0.0517 | 0.9061 | 0.9519 |
| 0.0629 | 6.3080 | 2908 | 0.8751 | 0.0517 | 0.8751 | 0.9355 |
| 0.0629 | 6.3124 | 2910 | 0.8274 | 0.0 | 0.8274 | 0.9096 |
| 0.0629 | 6.3167 | 2912 | 0.8128 | 0.0 | 0.8128 | 0.9015 |
| 0.0629 | 6.3210 | 2914 | 0.7909 | 0.0 | 0.7909 | 0.8893 |
| 0.0629 | 6.3254 | 2916 | 0.7567 | 0.0 | 0.7567 | 0.8699 |
| 0.0629 | 6.3297 | 2918 | 0.7476 | 0.0 | 0.7476 | 0.8646 |
| 0.0629 | 6.3341 | 2920 | 0.7574 | 0.0 | 0.7574 | 0.8703 |
| 0.0629 | 6.3384 | 2922 | 0.7682 | 0.0 | 0.7682 | 0.8765 |
| 0.0629 | 6.3427 | 2924 | 0.8077 | 0.0 | 0.8077 | 0.8987 |
| 0.0629 | 6.3471 | 2926 | 0.8552 | 0.0388 | 0.8552 | 0.9248 |
| 0.0629 | 6.3514 | 2928 | 0.8571 | 0.0222 | 0.8571 | 0.9258 |
| 0.0629 | 6.3557 | 2930 | 0.8638 | 0.0388 | 0.8638 | 0.9294 |
| 0.0629 | 6.3601 | 2932 | 0.8435 | 0.0222 | 0.8435 | 0.9184 |
| 0.0629 | 6.3644 | 2934 | 0.8093 | 0.0 | 0.8093 | 0.8996 |
| 0.0629 | 6.3688 | 2936 | 0.8020 | 0.0 | 0.8020 | 0.8956 |
| 0.0629 | 6.3731 | 2938 | 0.8199 | 0.0 | 0.8199 | 0.9055 |
| 0.0629 | 6.3774 | 2940 | 0.8190 | 0.0 | 0.8190 | 0.9050 |
| 0.0629 | 6.3818 | 2942 | 0.8010 | 0.0 | 0.8010 | 0.8950 |
| 0.0629 | 6.3861 | 2944 | 0.7663 | 0.0 | 0.7663 | 0.8754 |
| 0.0629 | 6.3905 | 2946 | 0.7581 | 0.0 | 0.7581 | 0.8707 |
| 0.0629 | 6.3948 | 2948 | 0.7777 | 0.0 | 0.7777 | 0.8818 |
| 0.0629 | 6.3991 | 2950 | 0.8079 | 0.0 | 0.8079 | 0.8988 |
| 0.0629 | 6.4035 | 2952 | 0.8323 | 0.0 | 0.8323 | 0.9123 |
| 0.0629 | 6.4078 | 2954 | 0.8216 | 0.0 | 0.8216 | 0.9064 |
| 0.0629 | 6.4121 | 2956 | 0.8078 | 0.0 | 0.8078 | 0.8988 |
| 0.0629 | 6.4165 | 2958 | 0.8106 | 0.0 | 0.8106 | 0.9003 |
| 0.0629 | 6.4208 | 2960 | 0.8227 | 0.0 | 0.8227 | 0.9070 |
| 0.0629 | 6.4252 | 2962 | 0.8182 | 0.0 | 0.8182 | 0.9045 |
| 0.0629 | 6.4295 | 2964 | 0.7936 | 0.0 | 0.7936 | 0.8909 |
| 0.0629 | 6.4338 | 2966 | 0.7741 | 0.0 | 0.7741 | 0.8799 |
| 0.0629 | 6.4382 | 2968 | 0.7913 | 0.0 | 0.7913 | 0.8896 |
| 0.0629 | 6.4425 | 2970 | 0.8388 | 0.0222 | 0.8388 | 0.9159 |
| 0.0629 | 6.4469 | 2972 | 0.9116 | -0.0845 | 0.9116 | 0.9548 |
| 0.0629 | 6.4512 | 2974 | 0.9280 | -0.0845 | 0.9280 | 0.9634 |
| 0.0629 | 6.4555 | 2976 | 0.8872 | -0.0845 | 0.8872 | 0.9419 |
| 0.0629 | 6.4599 | 2978 | 0.8270 | 0.0 | 0.8270 | 0.9094 |
| 0.0629 | 6.4642 | 2980 | 0.7975 | 0.0 | 0.7975 | 0.8930 |
| 0.0629 | 6.4685 | 2982 | 0.7792 | 0.0 | 0.7792 | 0.8827 |
| 0.0629 | 6.4729 | 2984 | 0.7857 | 0.0 | 0.7857 | 0.8864 |
| 0.0629 | 6.4772 | 2986 | 0.8114 | 0.0 | 0.8114 | 0.9008 |
| 0.0629 | 6.4816 | 2988 | 0.8122 | 0.0 | 0.8122 | 0.9012 |
| 0.0629 | 6.4859 | 2990 | 0.8100 | 0.0 | 0.8100 | 0.9000 |
| 0.0629 | 6.4902 | 2992 | 0.8274 | 0.0 | 0.8274 | 0.9096 |
| 0.0629 | 6.4946 | 2994 | 0.8171 | 0.0 | 0.8171 | 0.9039 |
| 0.0629 | 6.4989 | 2996 | 0.7833 | 0.0 | 0.7833 | 0.8850 |
| 0.0629 | 6.5033 | 2998 | 0.7461 | 0.0 | 0.7461 | 0.8638 |
| 0.0564 | 6.5076 | 3000 | 0.7329 | -0.0233 | 0.7329 | 0.8561 |
| 0.0564 | 6.5119 | 3002 | 0.7294 | -0.0233 | 0.7294 | 0.8540 |
| 0.0564 | 6.5163 | 3004 | 0.7305 | -0.0233 | 0.7305 | 0.8547 |
| 0.0564 | 6.5206 | 3006 | 0.7321 | 0.0 | 0.7321 | 0.8556 |
| 0.0564 | 6.5249 | 3008 | 0.7373 | 0.0 | 0.7373 | 0.8587 |
| 0.0564 | 6.5293 | 3010 | 0.7599 | 0.0 | 0.7599 | 0.8717 |
| 0.0564 | 6.5336 | 3012 | 0.7652 | 0.0 | 0.7652 | 0.8747 |
| 0.0564 | 6.5380 | 3014 | 0.7658 | 0.0 | 0.7658 | 0.8751 |
| 0.0564 | 6.5423 | 3016 | 0.7833 | 0.0 | 0.7833 | 0.8851 |
| 0.0564 | 6.5466 | 3018 | 0.7918 | 0.0 | 0.7918 | 0.8899 |
| 0.0564 | 6.5510 | 3020 | 0.7888 | 0.0 | 0.7888 | 0.8881 |
| 0.0564 | 6.5553 | 3022 | 0.7715 | 0.0 | 0.7715 | 0.8783 |
| 0.0564 | 6.5597 | 3024 | 0.7611 | 0.0 | 0.7611 | 0.8724 |
| 0.0564 | 6.5640 | 3026 | 0.7521 | 0.0 | 0.7521 | 0.8673 |
| 0.0564 | 6.5683 | 3028 | 0.7529 | 0.0 | 0.7529 | 0.8677 |
| 0.0564 | 6.5727 | 3030 | 0.7582 | 0.0 | 0.7582 | 0.8708 |
| 0.0564 | 6.5770 | 3032 | 0.7803 | 0.0 | 0.7803 | 0.8834 |
| 0.0564 | 6.5813 | 3034 | 0.8377 | 0.0222 | 0.8377 | 0.9153 |
| 0.0564 | 6.5857 | 3036 | 0.8742 | -0.1379 | 0.8742 | 0.9350 |
| 0.0564 | 6.5900 | 3038 | 0.8550 | -0.1379 | 0.8550 | 0.9247 |
| 0.0564 | 6.5944 | 3040 | 0.8104 | 0.0 | 0.8104 | 0.9002 |
| 0.0564 | 6.5987 | 3042 | 0.7677 | 0.0 | 0.7677 | 0.8762 |
| 0.0564 | 6.6030 | 3044 | 0.7485 | 0.0 | 0.7485 | 0.8651 |
| 0.0564 | 6.6074 | 3046 | 0.7357 | -0.0233 | 0.7357 | 0.8577 |
| 0.0564 | 6.6117 | 3048 | 0.7313 | -0.0233 | 0.7313 | 0.8552 |
| 0.0564 | 6.6161 | 3050 | 0.7352 | -0.0233 | 0.7352 | 0.8575 |
| 0.0564 | 6.6204 | 3052 | 0.7540 | 0.0 | 0.7540 | 0.8683 |
| 0.0564 | 6.6247 | 3054 | 0.7777 | 0.0 | 0.7777 | 0.8819 |
| 0.0564 | 6.6291 | 3056 | 0.8006 | 0.0222 | 0.8006 | 0.8948 |
| 0.0564 | 6.6334 | 3058 | 0.8572 | -0.1379 | 0.8572 | 0.9259 |
| 0.0564 | 6.6377 | 3060 | 0.9006 | -0.0845 | 0.9006 | 0.9490 |
| 0.0564 | 6.6421 | 3062 | 0.8947 | -0.1085 | 0.8947 | 0.9459 |
| 0.0564 | 6.6464 | 3064 | 0.8420 | -0.1379 | 0.8420 | 0.9176 |
| 0.0564 | 6.6508 | 3066 | 0.8040 | 0.0 | 0.8040 | 0.8966 |
| 0.0564 | 6.6551 | 3068 | 0.7789 | -0.0233 | 0.7789 | 0.8825 |
| 0.0564 | 6.6594 | 3070 | 0.7668 | -0.0233 | 0.7668 | 0.8757 |
| 0.0564 | 6.6638 | 3072 | 0.7714 | -0.0233 | 0.7714 | 0.8783 |
| 0.0564 | 6.6681 | 3074 | 0.7777 | -0.0233 | 0.7777 | 0.8819 |
| 0.0564 | 6.6725 | 3076 | 0.8086 | -0.0233 | 0.8086 | 0.8992 |
| 0.0564 | 6.6768 | 3078 | 0.8367 | 0.0 | 0.8367 | 0.9147 |
| 0.0564 | 6.6811 | 3080 | 0.8587 | -0.1379 | 0.8587 | 0.9266 |
| 0.0564 | 6.6855 | 3082 | 0.8831 | -0.1379 | 0.8831 | 0.9397 |
| 0.0564 | 6.6898 | 3084 | 0.8677 | -0.1379 | 0.8677 | 0.9315 |
| 0.0564 | 6.6941 | 3086 | 0.8358 | 0.2143 | 0.8358 | 0.9142 |
| 0.0564 | 6.6985 | 3088 | 0.8112 | -0.0233 | 0.8112 | 0.9007 |
| 0.0564 | 6.7028 | 3090 | 0.8141 | -0.0233 | 0.8141 | 0.9022 |
| 0.0564 | 6.7072 | 3092 | 0.8437 | -0.1379 | 0.8437 | 0.9185 |
| 0.0564 | 6.7115 | 3094 | 0.8920 | -0.1085 | 0.8920 | 0.9445 |
| 0.0564 | 6.7158 | 3096 | 0.8987 | -0.1085 | 0.8987 | 0.9480 |
| 0.0564 | 6.7202 | 3098 | 0.8754 | -0.1379 | 0.8754 | 0.9356 |
| 0.0564 | 6.7245 | 3100 | 0.8371 | 0.2524 | 0.8371 | 0.9149 |
| 0.0564 | 6.7289 | 3102 | 0.7898 | -0.0233 | 0.7898 | 0.8887 |
| 0.0564 | 6.7332 | 3104 | 0.7744 | -0.0233 | 0.7744 | 0.8800 |
| 0.0564 | 6.7375 | 3106 | 0.7713 | -0.0233 | 0.7713 | 0.8782 |
| 0.0564 | 6.7419 | 3108 | 0.7816 | -0.0233 | 0.7816 | 0.8841 |
| 0.0564 | 6.7462 | 3110 | 0.8152 | 0.0 | 0.8152 | 0.9029 |
| 0.0564 | 6.7505 | 3112 | 0.8705 | -0.1379 | 0.8705 | 0.9330 |
| 0.0564 | 6.7549 | 3114 | 0.8868 | -0.1085 | 0.8868 | 0.9417 |
| 0.0564 | 6.7592 | 3116 | 0.8673 | -0.1379 | 0.8673 | 0.9313 |
| 0.0564 | 6.7636 | 3118 | 0.8180 | 0.0 | 0.8180 | 0.9045 |
| 0.0564 | 6.7679 | 3120 | 0.7760 | 0.0 | 0.7760 | 0.8809 |
| 0.0564 | 6.7722 | 3122 | 0.7648 | 0.0 | 0.7648 | 0.8746 |
| 0.0564 | 6.7766 | 3124 | 0.7703 | 0.0 | 0.7703 | 0.8777 |
| 0.0564 | 6.7809 | 3126 | 0.7727 | 0.0 | 0.7727 | 0.8791 |
| 0.0564 | 6.7852 | 3128 | 0.7878 | 0.0 | 0.7878 | 0.8876 |
| 0.0564 | 6.7896 | 3130 | 0.8187 | 0.0 | 0.8187 | 0.9048 |
| 0.0564 | 6.7939 | 3132 | 0.8485 | -0.1379 | 0.8485 | 0.9212 |
| 0.0564 | 6.7983 | 3134 | 0.8827 | -0.1085 | 0.8827 | 0.9395 |
| 0.0564 | 6.8026 | 3136 | 0.8638 | -0.1085 | 0.8638 | 0.9294 |
| 0.0564 | 6.8069 | 3138 | 0.8096 | 0.2667 | 0.8096 | 0.8998 |
| 0.0564 | 6.8113 | 3140 | 0.7777 | 0.0 | 0.7777 | 0.8819 |
| 0.0564 | 6.8156 | 3142 | 0.7490 | -0.0233 | 0.7490 | 0.8655 |
| 0.0564 | 6.8200 | 3144 | 0.7454 | -0.0233 | 0.7454 | 0.8634 |
| 0.0564 | 6.8243 | 3146 | 0.7558 | 0.0 | 0.7558 | 0.8694 |
| 0.0564 | 6.8286 | 3148 | 0.7597 | 0.0 | 0.7597 | 0.8716 |
| 0.0564 | 6.8330 | 3150 | 0.7806 | 0.0 | 0.7806 | 0.8835 |
| 0.0564 | 6.8373 | 3152 | 0.8214 | 0.2524 | 0.8214 | 0.9063 |
| 0.0564 | 6.8416 | 3154 | 0.8310 | -0.1379 | 0.8310 | 0.9116 |
| 0.0564 | 6.8460 | 3156 | 0.8085 | 0.2667 | 0.8085 | 0.8992 |
| 0.0564 | 6.8503 | 3158 | 0.7638 | 0.0 | 0.7638 | 0.8740 |
| 0.0564 | 6.8547 | 3160 | 0.7272 | 0.0 | 0.7272 | 0.8528 |
| 0.0564 | 6.8590 | 3162 | 0.7169 | 0.0 | 0.7169 | 0.8467 |
| 0.0564 | 6.8633 | 3164 | 0.7208 | 0.0 | 0.7208 | 0.8490 |
| 0.0564 | 6.8677 | 3166 | 0.7428 | 0.0 | 0.7428 | 0.8618 |
| 0.0564 | 6.8720 | 3168 | 0.7932 | 0.0 | 0.7932 | 0.8906 |
| 0.0564 | 6.8764 | 3170 | 0.8466 | -0.1085 | 0.8466 | 0.9201 |
| 0.0564 | 6.8807 | 3172 | 0.8513 | -0.1085 | 0.8513 | 0.9227 |
| 0.0564 | 6.8850 | 3174 | 0.8157 | 0.2524 | 0.8157 | 0.9032 |
| 0.0564 | 6.8894 | 3176 | 0.7620 | 0.0 | 0.7620 | 0.8729 |
| 0.0564 | 6.8937 | 3178 | 0.7237 | 0.0 | 0.7237 | 0.8507 |
| 0.0564 | 6.8980 | 3180 | 0.7193 | 0.0 | 0.7193 | 0.8481 |
| 0.0564 | 6.9024 | 3182 | 0.7203 | 0.0 | 0.7203 | 0.8487 |
| 0.0564 | 6.9067 | 3184 | 0.7322 | 0.0 | 0.7322 | 0.8557 |
| 0.0564 | 6.9111 | 3186 | 0.7628 | 0.0 | 0.7628 | 0.8734 |
| 0.0564 | 6.9154 | 3188 | 0.8307 | -0.3276 | 0.8307 | 0.9114 |
| 0.0564 | 6.9197 | 3190 | 0.9205 | -0.0845 | 0.9205 | 0.9594 |
| 0.0564 | 6.9241 | 3192 | 0.9620 | -0.0845 | 0.9620 | 0.9808 |
| 0.0564 | 6.9284 | 3194 | 0.9527 | -0.0845 | 0.9527 | 0.9761 |
| 0.0564 | 6.9328 | 3196 | 0.9168 | -0.1085 | 0.9168 | 0.9575 |
| 0.0564 | 6.9371 | 3198 | 0.8669 | -0.1085 | 0.8669 | 0.9311 |
| 0.0564 | 6.9414 | 3200 | 0.8436 | -0.3883 | 0.8436 | 0.9185 |
| 0.0564 | 6.9458 | 3202 | 0.8410 | -0.4667 | 0.8410 | 0.9171 |
| 0.0564 | 6.9501 | 3204 | 0.8571 | -0.3883 | 0.8571 | 0.9258 |
| 0.0564 | 6.9544 | 3206 | 0.8687 | -0.3883 | 0.8687 | 0.9320 |
| 0.0564 | 6.9588 | 3208 | 0.8801 | -0.3276 | 0.8801 | 0.9382 |
| 0.0564 | 6.9631 | 3210 | 0.9093 | -0.1085 | 0.9093 | 0.9536 |
| 0.0564 | 6.9675 | 3212 | 0.9348 | -0.1085 | 0.9348 | 0.9669 |
| 0.0564 | 6.9718 | 3214 | 0.9299 | -0.1085 | 0.9299 | 0.9643 |
| 0.0564 | 6.9761 | 3216 | 0.8893 | -0.3276 | 0.8893 | 0.9430 |
| 0.0564 | 6.9805 | 3218 | 0.8480 | -0.4667 | 0.8480 | 0.9209 |
| 0.0564 | 6.9848 | 3220 | 0.8358 | 0.0 | 0.8358 | 0.9142 |
| 0.0564 | 6.9892 | 3222 | 0.8539 | -0.4667 | 0.8539 | 0.9240 |
| 0.0564 | 6.9935 | 3224 | 0.8522 | -0.4667 | 0.8522 | 0.9232 |
| 0.0564 | 6.9978 | 3226 | 0.8610 | -0.4667 | 0.8610 | 0.9279 |
| 0.0564 | 7.0022 | 3228 | 0.8810 | -0.3276 | 0.8810 | 0.9386 |
| 0.0564 | 7.0065 | 3230 | 0.8850 | -0.3276 | 0.8850 | 0.9407 |
| 0.0564 | 7.0108 | 3232 | 0.8829 | -0.3276 | 0.8829 | 0.9396 |
| 0.0564 | 7.0152 | 3234 | 0.8610 | -0.4667 | 0.8610 | 0.9279 |
| 0.0564 | 7.0195 | 3236 | 0.8532 | 0.0 | 0.8532 | 0.9237 |
| 0.0564 | 7.0239 | 3238 | 0.8535 | 0.0 | 0.8535 | 0.9239 |
| 0.0564 | 7.0282 | 3240 | 0.8753 | -0.3883 | 0.8753 | 0.9356 |
| 0.0564 | 7.0325 | 3242 | 0.8913 | -0.3276 | 0.8913 | 0.9441 |
| 0.0564 | 7.0369 | 3244 | 0.8889 | -0.3883 | 0.8889 | 0.9428 |
| 0.0564 | 7.0412 | 3246 | 0.8717 | -0.4667 | 0.8717 | 0.9337 |
| 0.0564 | 7.0456 | 3248 | 0.8551 | 0.0 | 0.8551 | 0.9247 |
| 0.0564 | 7.0499 | 3250 | 0.8630 | -0.4667 | 0.8630 | 0.9290 |
| 0.0564 | 7.0542 | 3252 | 0.8649 | -0.4667 | 0.8649 | 0.9300 |
| 0.0564 | 7.0586 | 3254 | 0.8680 | -0.4667 | 0.8680 | 0.9317 |
| 0.0564 | 7.0629 | 3256 | 0.8483 | 0.0 | 0.8483 | 0.9210 |
| 0.0564 | 7.0672 | 3258 | 0.8298 | 0.0 | 0.8298 | 0.9109 |
| 0.0564 | 7.0716 | 3260 | 0.8259 | 0.0 | 0.8259 | 0.9088 |
| 0.0564 | 7.0759 | 3262 | 0.8490 | 0.0 | 0.8490 | 0.9214 |
| 0.0564 | 7.0803 | 3264 | 0.8960 | -0.3276 | 0.8960 | 0.9466 |
| 0.0564 | 7.0846 | 3266 | 0.9225 | -0.1085 | 0.9225 | 0.9604 |
| 0.0564 | 7.0889 | 3268 | 0.9116 | -0.1085 | 0.9116 | 0.9548 |
| 0.0564 | 7.0933 | 3270 | 0.8758 | -0.4667 | 0.8758 | 0.9359 |
| 0.0564 | 7.0976 | 3272 | 0.8521 | 0.0 | 0.8521 | 0.9231 |
| 0.0564 | 7.1020 | 3274 | 0.8448 | 0.0 | 0.8448 | 0.9191 |
| 0.0564 | 7.1063 | 3276 | 0.8498 | 0.0 | 0.8498 | 0.9218 |
| 0.0564 | 7.1106 | 3278 | 0.8833 | -0.3276 | 0.8833 | 0.9398 |
| 0.0564 | 7.1150 | 3280 | 0.9088 | -0.1085 | 0.9088 | 0.9533 |
| 0.0564 | 7.1193 | 3282 | 0.9274 | -0.1085 | 0.9274 | 0.9630 |
| 0.0564 | 7.1236 | 3284 | 0.9034 | -0.1085 | 0.9034 | 0.9505 |
| 0.0564 | 7.1280 | 3286 | 0.8671 | -0.3883 | 0.8671 | 0.9312 |
| 0.0564 | 7.1323 | 3288 | 0.8532 | -0.4667 | 0.8532 | 0.9237 |
| 0.0564 | 7.1367 | 3290 | 0.8525 | -0.4667 | 0.8525 | 0.9233 |
| 0.0564 | 7.1410 | 3292 | 0.8283 | 0.0 | 0.8283 | 0.9101 |
| 0.0564 | 7.1453 | 3294 | 0.8048 | 0.0 | 0.8048 | 0.8971 |
| 0.0564 | 7.1497 | 3296 | 0.8069 | 0.0 | 0.8069 | 0.8983 |
| 0.0564 | 7.1540 | 3298 | 0.8200 | 0.0 | 0.8200 | 0.9055 |
| 0.0564 | 7.1584 | 3300 | 0.8260 | 0.0 | 0.8260 | 0.9088 |
| 0.0564 | 7.1627 | 3302 | 0.8554 | -0.1085 | 0.8554 | 0.9249 |
| 0.0564 | 7.1670 | 3304 | 0.8754 | -0.1085 | 0.8754 | 0.9356 |
| 0.0564 | 7.1714 | 3306 | 0.8672 | -0.1085 | 0.8672 | 0.9313 |
| 0.0564 | 7.1757 | 3308 | 0.8274 | 0.0 | 0.8274 | 0.9096 |
| 0.0564 | 7.1800 | 3310 | 0.8044 | 0.0 | 0.8044 | 0.8969 |
| 0.0564 | 7.1844 | 3312 | 0.7903 | 0.0 | 0.7903 | 0.8890 |
| 0.0564 | 7.1887 | 3314 | 0.7991 | 0.0 | 0.7991 | 0.8939 |
| 0.0564 | 7.1931 | 3316 | 0.8370 | 0.2524 | 0.8370 | 0.9149 |
| 0.0564 | 7.1974 | 3318 | 0.8869 | -0.1085 | 0.8869 | 0.9418 |
| 0.0564 | 7.2017 | 3320 | 0.9156 | -0.1085 | 0.9156 | 0.9569 |
| 0.0564 | 7.2061 | 3322 | 0.9120 | -0.1085 | 0.9120 | 0.9550 |
| 0.0564 | 7.2104 | 3324 | 0.8748 | -0.1085 | 0.8748 | 0.9353 |
| 0.0564 | 7.2148 | 3326 | 0.8307 | 0.0 | 0.8307 | 0.9115 |
| 0.0564 | 7.2191 | 3328 | 0.7993 | 0.0 | 0.7993 | 0.8940 |
| 0.0564 | 7.2234 | 3330 | 0.8002 | 0.0 | 0.8002 | 0.8945 |
| 0.0564 | 7.2278 | 3332 | 0.8090 | 0.0 | 0.8090 | 0.8994 |
| 0.0564 | 7.2321 | 3334 | 0.8493 | -0.1085 | 0.8493 | 0.9216 |
| 0.0564 | 7.2364 | 3336 | 0.9280 | -0.1085 | 0.9280 | 0.9633 |
| 0.0564 | 7.2408 | 3338 | 0.9843 | -0.0845 | 0.9843 | 0.9921 |
| 0.0564 | 7.2451 | 3340 | 0.9883 | -0.0845 | 0.9883 | 0.9941 |
| 0.0564 | 7.2495 | 3342 | 0.9515 | -0.1085 | 0.9515 | 0.9755 |
| 0.0564 | 7.2538 | 3344 | 0.8959 | -0.1085 | 0.8959 | 0.9465 |
| 0.0564 | 7.2581 | 3346 | 0.8637 | -0.1085 | 0.8637 | 0.9294 |
| 0.0564 | 7.2625 | 3348 | 0.8709 | -0.1085 | 0.8709 | 0.9332 |
| 0.0564 | 7.2668 | 3350 | 0.8944 | -0.1085 | 0.8944 | 0.9457 |
| 0.0564 | 7.2711 | 3352 | 0.9198 | -0.1085 | 0.9198 | 0.9590 |
| 0.0564 | 7.2755 | 3354 | 0.9251 | -0.1085 | 0.9251 | 0.9618 |
| 0.0564 | 7.2798 | 3356 | 0.8942 | -0.1085 | 0.8942 | 0.9456 |
| 0.0564 | 7.2842 | 3358 | 0.8825 | -0.1085 | 0.8825 | 0.9394 |
| 0.0564 | 7.2885 | 3360 | 0.8585 | -0.1085 | 0.8585 | 0.9265 |
| 0.0564 | 7.2928 | 3362 | 0.8346 | 0.0 | 0.8346 | 0.9136 |
| 0.0564 | 7.2972 | 3364 | 0.8257 | 0.0 | 0.8257 | 0.9087 |
| 0.0564 | 7.3015 | 3366 | 0.8350 | 0.0 | 0.8350 | 0.9138 |
| 0.0564 | 7.3059 | 3368 | 0.8496 | 0.0222 | 0.8496 | 0.9217 |
| 0.0564 | 7.3102 | 3370 | 0.8795 | -0.1085 | 0.8795 | 0.9378 |
| 0.0564 | 7.3145 | 3372 | 0.9157 | -0.1085 | 0.9157 | 0.9569 |
| 0.0564 | 7.3189 | 3374 | 0.9265 | -0.1085 | 0.9265 | 0.9625 |
| 0.0564 | 7.3232 | 3376 | 0.9145 | -0.1085 | 0.9145 | 0.9563 |
| 0.0564 | 7.3275 | 3378 | 0.9072 | -0.1085 | 0.9072 | 0.9525 |
| 0.0564 | 7.3319 | 3380 | 0.8780 | -0.1085 | 0.8780 | 0.9370 |
| 0.0564 | 7.3362 | 3382 | 0.8546 | 0.0222 | 0.8546 | 0.9245 |
| 0.0564 | 7.3406 | 3384 | 0.8462 | 0.0 | 0.8462 | 0.9199 |
| 0.0564 | 7.3449 | 3386 | 0.8476 | 0.0 | 0.8476 | 0.9206 |
| 0.0564 | 7.3492 | 3388 | 0.8638 | -0.1085 | 0.8638 | 0.9294 |
| 0.0564 | 7.3536 | 3390 | 0.8482 | 0.0 | 0.8482 | 0.9210 |
| 0.0564 | 7.3579 | 3392 | 0.8328 | 0.0 | 0.8328 | 0.9126 |
| 0.0564 | 7.3623 | 3394 | 0.8251 | 0.0 | 0.8251 | 0.9084 |
| 0.0564 | 7.3666 | 3396 | 0.8426 | 0.0 | 0.8426 | 0.9180 |
| 0.0564 | 7.3709 | 3398 | 0.8738 | -0.1085 | 0.8738 | 0.9348 |
| 0.0564 | 7.3753 | 3400 | 0.8957 | -0.1085 | 0.8957 | 0.9464 |
| 0.0564 | 7.3796 | 3402 | 0.8932 | -0.1085 | 0.8932 | 0.9451 |
| 0.0564 | 7.3839 | 3404 | 0.8720 | -0.1085 | 0.8720 | 0.9338 |
| 0.0564 | 7.3883 | 3406 | 0.8446 | 0.0 | 0.8446 | 0.9190 |
| 0.0564 | 7.3926 | 3408 | 0.8341 | 0.0 | 0.8341 | 0.9133 |
| 0.0564 | 7.3970 | 3410 | 0.8260 | 0.0 | 0.8260 | 0.9088 |
| 0.0564 | 7.4013 | 3412 | 0.8331 | 0.0 | 0.8331 | 0.9128 |
| 0.0564 | 7.4056 | 3414 | 0.8629 | -0.4667 | 0.8629 | 0.9289 |
| 0.0564 | 7.4100 | 3416 | 0.8807 | -0.1085 | 0.8807 | 0.9385 |
| 0.0564 | 7.4143 | 3418 | 0.8859 | -0.1085 | 0.8859 | 0.9412 |
| 0.0564 | 7.4187 | 3420 | 0.8748 | -0.3276 | 0.8748 | 0.9353 |
| 0.0564 | 7.4230 | 3422 | 0.8751 | -0.3276 | 0.8751 | 0.9355 |
| 0.0564 | 7.4273 | 3424 | 0.8738 | -0.3276 | 0.8738 | 0.9347 |
| 0.0564 | 7.4317 | 3426 | 0.8628 | -0.4667 | 0.8628 | 0.9289 |
| 0.0564 | 7.4360 | 3428 | 0.8481 | 0.0 | 0.8481 | 0.9209 |
| 0.0564 | 7.4403 | 3430 | 0.8510 | 0.0 | 0.8510 | 0.9225 |
| 0.0564 | 7.4447 | 3432 | 0.8567 | -0.4667 | 0.8567 | 0.9256 |
| 0.0564 | 7.4490 | 3434 | 0.8719 | -0.3883 | 0.8719 | 0.9338 |
| 0.0564 | 7.4534 | 3436 | 0.8905 | -0.1085 | 0.8905 | 0.9436 |
| 0.0564 | 7.4577 | 3438 | 0.8879 | -0.1085 | 0.8879 | 0.9423 |
| 0.0564 | 7.4620 | 3440 | 0.8687 | -0.3883 | 0.8687 | 0.9320 |
| 0.0564 | 7.4664 | 3442 | 0.8479 | 0.0 | 0.8479 | 0.9208 |
| 0.0564 | 7.4707 | 3444 | 0.8332 | 0.0 | 0.8332 | 0.9128 |
| 0.0564 | 7.4751 | 3446 | 0.8213 | 0.0 | 0.8213 | 0.9062 |
| 0.0564 | 7.4794 | 3448 | 0.8287 | 0.0 | 0.8287 | 0.9103 |
| 0.0564 | 7.4837 | 3450 | 0.8382 | 0.0 | 0.8382 | 0.9155 |
| 0.0564 | 7.4881 | 3452 | 0.8284 | 0.0 | 0.8284 | 0.9101 |
| 0.0564 | 7.4924 | 3454 | 0.8277 | 0.0 | 0.8277 | 0.9098 |
| 0.0564 | 7.4967 | 3456 | 0.8426 | 0.0 | 0.8426 | 0.9179 |
| 0.0564 | 7.5011 | 3458 | 0.8657 | -0.3883 | 0.8657 | 0.9304 |
| 0.0564 | 7.5054 | 3460 | 0.8771 | -0.1379 | 0.8771 | 0.9365 |
| 0.0564 | 7.5098 | 3462 | 0.8860 | -0.1085 | 0.8860 | 0.9413 |
| 0.0564 | 7.5141 | 3464 | 0.8863 | -0.1085 | 0.8863 | 0.9414 |
| 0.0564 | 7.5184 | 3466 | 0.8725 | -0.1379 | 0.8725 | 0.9341 |
| 0.0564 | 7.5228 | 3468 | 0.8470 | 0.0 | 0.8470 | 0.9203 |
| 0.0564 | 7.5271 | 3470 | 0.8248 | 0.0 | 0.8248 | 0.9082 |
| 0.0564 | 7.5315 | 3472 | 0.8167 | 0.0 | 0.8167 | 0.9037 |
| 0.0564 | 7.5358 | 3474 | 0.8039 | 0.0 | 0.8039 | 0.8966 |
| 0.0564 | 7.5401 | 3476 | 0.8000 | 0.0 | 0.8000 | 0.8945 |
| 0.0564 | 7.5445 | 3478 | 0.8161 | 0.0 | 0.8161 | 0.9034 |
| 0.0564 | 7.5488 | 3480 | 0.8198 | 0.0 | 0.8198 | 0.9054 |
| 0.0564 | 7.5531 | 3482 | 0.8178 | 0.0 | 0.8178 | 0.9043 |
| 0.0564 | 7.5575 | 3484 | 0.8021 | 0.0 | 0.8021 | 0.8956 |
| 0.0564 | 7.5618 | 3486 | 0.7966 | 0.0 | 0.7966 | 0.8925 |
| 0.0564 | 7.5662 | 3488 | 0.7989 | 0.0 | 0.7989 | 0.8938 |
| 0.0564 | 7.5705 | 3490 | 0.8126 | 0.0 | 0.8126 | 0.9014 |
| 0.0564 | 7.5748 | 3492 | 0.8335 | 0.0 | 0.8335 | 0.9130 |
| 0.0564 | 7.5792 | 3494 | 0.8500 | -0.1379 | 0.8500 | 0.9220 |
| 0.0564 | 7.5835 | 3496 | 0.8659 | -0.1379 | 0.8659 | 0.9305 |
| 0.0564 | 7.5879 | 3498 | 0.8690 | -0.1379 | 0.8690 | 0.9322 |
| 0.0481 | 7.5922 | 3500 | 0.8805 | -0.1085 | 0.8805 | 0.9384 |
| 0.0481 | 7.5965 | 3502 | 0.8903 | -0.1085 | 0.8903 | 0.9436 |
| 0.0481 | 7.6009 | 3504 | 0.8657 | -0.1379 | 0.8657 | 0.9304 |
| 0.0481 | 7.6052 | 3506 | 0.8285 | 0.0 | 0.8285 | 0.9102 |
| 0.0481 | 7.6095 | 3508 | 0.7895 | 0.0 | 0.7895 | 0.8885 |
| 0.0481 | 7.6139 | 3510 | 0.7707 | 0.0 | 0.7707 | 0.8779 |
| 0.0481 | 7.6182 | 3512 | 0.7691 | 0.0 | 0.7691 | 0.8770 |
| 0.0481 | 7.6226 | 3514 | 0.7755 | 0.0 | 0.7755 | 0.8806 |
| 0.0481 | 7.6269 | 3516 | 0.7794 | 0.0 | 0.7794 | 0.8828 |
| 0.0481 | 7.6312 | 3518 | 0.7921 | 0.0 | 0.7921 | 0.8900 |
| 0.0481 | 7.6356 | 3520 | 0.8085 | 0.0 | 0.8085 | 0.8992 |
| 0.0481 | 7.6399 | 3522 | 0.8354 | 0.0 | 0.8354 | 0.9140 |
| 0.0481 | 7.6443 | 3524 | 0.8534 | 0.2524 | 0.8534 | 0.9238 |
| 0.0481 | 7.6486 | 3526 | 0.8547 | 0.2524 | 0.8547 | 0.9245 |
| 0.0481 | 7.6529 | 3528 | 0.8623 | -0.1379 | 0.8623 | 0.9286 |
| 0.0481 | 7.6573 | 3530 | 0.8790 | -0.1085 | 0.8790 | 0.9375 |
| 0.0481 | 7.6616 | 3532 | 0.8870 | -0.1085 | 0.8870 | 0.9418 |
| 0.0481 | 7.6659 | 3534 | 0.8879 | -0.1085 | 0.8879 | 0.9423 |
| 0.0481 | 7.6703 | 3536 | 0.8824 | -0.1085 | 0.8824 | 0.9394 |
| 0.0481 | 7.6746 | 3538 | 0.8676 | -0.1379 | 0.8676 | 0.9315 |
| 0.0481 | 7.6790 | 3540 | 0.8784 | -0.1085 | 0.8784 | 0.9372 |
| 0.0481 | 7.6833 | 3542 | 0.8821 | -0.1085 | 0.8821 | 0.9392 |
| 0.0481 | 7.6876 | 3544 | 0.9022 | -0.1085 | 0.9022 | 0.9499 |
| 0.0481 | 7.6920 | 3546 | 0.8882 | -0.1085 | 0.8882 | 0.9425 |
| 0.0481 | 7.6963 | 3548 | 0.8900 | -0.1085 | 0.8900 | 0.9434 |
| 0.0481 | 7.7007 | 3550 | 0.8822 | -0.1085 | 0.8822 | 0.9392 |
| 0.0481 | 7.7050 | 3552 | 0.8594 | -0.3883 | 0.8594 | 0.9270 |
| 0.0481 | 7.7093 | 3554 | 0.8259 | 0.0 | 0.8259 | 0.9088 |
| 0.0481 | 7.7137 | 3556 | 0.7969 | 0.0 | 0.7969 | 0.8927 |
| 0.0481 | 7.7180 | 3558 | 0.7942 | 0.0 | 0.7942 | 0.8912 |
| 0.0481 | 7.7223 | 3560 | 0.8133 | 0.0 | 0.8133 | 0.9018 |
| 0.0481 | 7.7267 | 3562 | 0.8264 | 0.0 | 0.8264 | 0.9091 |
| 0.0481 | 7.7310 | 3564 | 0.8263 | 0.0 | 0.8263 | 0.9090 |
| 0.0481 | 7.7354 | 3566 | 0.8532 | -0.4667 | 0.8532 | 0.9237 |
| 0.0481 | 7.7397 | 3568 | 0.8713 | -0.1379 | 0.8713 | 0.9334 |
| 0.0481 | 7.7440 | 3570 | 0.8714 | -0.1379 | 0.8714 | 0.9335 |
| 0.0481 | 7.7484 | 3572 | 0.8769 | -0.1379 | 0.8769 | 0.9364 |
| 0.0481 | 7.7527 | 3574 | 0.8686 | -0.1379 | 0.8686 | 0.9320 |
| 0.0481 | 7.7570 | 3576 | 0.8566 | -0.1379 | 0.8566 | 0.9255 |
| 0.0481 | 7.7614 | 3578 | 0.8216 | 0.0 | 0.8216 | 0.9064 |
| 0.0481 | 7.7657 | 3580 | 0.7926 | 0.0 | 0.7926 | 0.8903 |
| 0.0481 | 7.7701 | 3582 | 0.7782 | -0.0233 | 0.7782 | 0.8822 |
| 0.0481 | 7.7744 | 3584 | 0.7805 | -0.0233 | 0.7805 | 0.8834 |
| 0.0481 | 7.7787 | 3586 | 0.7954 | -0.0233 | 0.7954 | 0.8919 |
| 0.0481 | 7.7831 | 3588 | 0.8110 | 0.0 | 0.8110 | 0.9006 |
| 0.0481 | 7.7874 | 3590 | 0.8355 | 0.0 | 0.8355 | 0.9141 |
| 0.0481 | 7.7918 | 3592 | 0.8535 | -0.4667 | 0.8535 | 0.9238 |
| 0.0481 | 7.7961 | 3594 | 0.8800 | -0.1379 | 0.8800 | 0.9381 |
| 0.0481 | 7.8004 | 3596 | 0.8961 | -0.1085 | 0.8961 | 0.9466 |
| 0.0481 | 7.8048 | 3598 | 0.9048 | -0.1085 | 0.9048 | 0.9512 |
| 0.0481 | 7.8091 | 3600 | 0.9023 | -0.1085 | 0.9023 | 0.9499 |
| 0.0481 | 7.8134 | 3602 | 0.8970 | -0.1085 | 0.8970 | 0.9471 |
| 0.0481 | 7.8178 | 3604 | 0.8874 | -0.1379 | 0.8874 | 0.9420 |
| 0.0481 | 7.8221 | 3606 | 0.8570 | -0.1748 | 0.8570 | 0.9258 |
| 0.0481 | 7.8265 | 3608 | 0.8220 | 0.0 | 0.8220 | 0.9067 |
| 0.0481 | 7.8308 | 3610 | 0.7959 | 0.0 | 0.7959 | 0.8921 |
| 0.0481 | 7.8351 | 3612 | 0.7913 | 0.0 | 0.7913 | 0.8895 |
| 0.0481 | 7.8395 | 3614 | 0.8020 | 0.0 | 0.8020 | 0.8956 |
| 0.0481 | 7.8438 | 3616 | 0.8200 | 0.0 | 0.8200 | 0.9056 |
| 0.0481 | 7.8482 | 3618 | 0.8398 | -0.4667 | 0.8398 | 0.9164 |
| 0.0481 | 7.8525 | 3620 | 0.8452 | -0.4667 | 0.8452 | 0.9194 |
| 0.0481 | 7.8568 | 3622 | 0.8487 | -0.4667 | 0.8487 | 0.9212 |
| 0.0481 | 7.8612 | 3624 | 0.8579 | -0.1379 | 0.8579 | 0.9262 |
| 0.0481 | 7.8655 | 3626 | 0.8546 | -0.1379 | 0.8546 | 0.9244 |
| 0.0481 | 7.8698 | 3628 | 0.8491 | -0.4667 | 0.8491 | 0.9215 |
| 0.0481 | 7.8742 | 3630 | 0.8563 | -0.1379 | 0.8563 | 0.9254 |
| 0.0481 | 7.8785 | 3632 | 0.8661 | -0.1379 | 0.8661 | 0.9307 |
| 0.0481 | 7.8829 | 3634 | 0.8550 | -0.4667 | 0.8550 | 0.9246 |
| 0.0481 | 7.8872 | 3636 | 0.8390 | -0.4667 | 0.8390 | 0.9160 |
| 0.0481 | 7.8915 | 3638 | 0.8377 | -0.4667 | 0.8377 | 0.9153 |
| 0.0481 | 7.8959 | 3640 | 0.8404 | -0.4667 | 0.8404 | 0.9167 |
| 0.0481 | 7.9002 | 3642 | 0.8286 | 0.0 | 0.8286 | 0.9103 |
| 0.0481 | 7.9046 | 3644 | 0.8113 | 0.0 | 0.8113 | 0.9007 |
| 0.0481 | 7.9089 | 3646 | 0.8076 | -0.0233 | 0.8076 | 0.8987 |
| 0.0481 | 7.9132 | 3648 | 0.8128 | 0.0 | 0.8128 | 0.9016 |
| 0.0481 | 7.9176 | 3650 | 0.8333 | -0.4667 | 0.8333 | 0.9129 |
| 0.0481 | 7.9219 | 3652 | 0.8461 | -0.4667 | 0.8461 | 0.9199 |
| 0.0481 | 7.9262 | 3654 | 0.8551 | -0.1748 | 0.8551 | 0.9247 |
| 0.0481 | 7.9306 | 3656 | 0.8694 | -0.1379 | 0.8694 | 0.9324 |
| 0.0481 | 7.9349 | 3658 | 0.8590 | -0.1379 | 0.8590 | 0.9268 |
| 0.0481 | 7.9393 | 3660 | 0.8413 | -0.1748 | 0.8413 | 0.9172 |
| 0.0481 | 7.9436 | 3662 | 0.8133 | 0.0 | 0.8133 | 0.9018 |
| 0.0481 | 7.9479 | 3664 | 0.7993 | 0.0 | 0.7993 | 0.8941 |
| 0.0481 | 7.9523 | 3666 | 0.7851 | 0.0 | 0.7851 | 0.8860 |
| 0.0481 | 7.9566 | 3668 | 0.7875 | 0.0 | 0.7875 | 0.8874 |
| 0.0481 | 7.9610 | 3670 | 0.8047 | 0.0 | 0.8047 | 0.8971 |
| 0.0481 | 7.9653 | 3672 | 0.8286 | -0.4667 | 0.8286 | 0.9103 |
| 0.0481 | 7.9696 | 3674 | 0.8485 | -0.1748 | 0.8485 | 0.9212 |
| 0.0481 | 7.9740 | 3676 | 0.8649 | -0.1379 | 0.8649 | 0.9300 |
| 0.0481 | 7.9783 | 3678 | 0.9069 | -0.1085 | 0.9069 | 0.9523 |
| 0.0481 | 7.9826 | 3680 | 0.9317 | -0.1085 | 0.9317 | 0.9652 |
| 0.0481 | 7.9870 | 3682 | 0.9332 | -0.1085 | 0.9332 | 0.9660 |
| 0.0481 | 7.9913 | 3684 | 0.9108 | -0.1085 | 0.9108 | 0.9543 |
| 0.0481 | 7.9957 | 3686 | 0.8783 | -0.1379 | 0.8783 | 0.9372 |
| 0.0481 | 8.0 | 3688 | 0.8434 | -0.1748 | 0.8434 | 0.9184 |
| 0.0481 | 8.0043 | 3690 | 0.8177 | -0.4667 | 0.8177 | 0.9043 |
| 0.0481 | 8.0087 | 3692 | 0.8068 | 0.0 | 0.8068 | 0.8982 |
| 0.0481 | 8.0130 | 3694 | 0.8115 | 0.0 | 0.8115 | 0.9008 |
| 0.0481 | 8.0174 | 3696 | 0.8122 | 0.0 | 0.8122 | 0.9012 |
| 0.0481 | 8.0217 | 3698 | 0.8117 | 0.0 | 0.8117 | 0.9009 |
| 0.0481 | 8.0260 | 3700 | 0.7959 | 0.0 | 0.7959 | 0.8922 |
| 0.0481 | 8.0304 | 3702 | 0.7827 | 0.0 | 0.7827 | 0.8847 |
| 0.0481 | 8.0347 | 3704 | 0.7726 | 0.0 | 0.7726 | 0.8790 |
| 0.0481 | 8.0390 | 3706 | 0.7690 | 0.0 | 0.7690 | 0.8769 |
| 0.0481 | 8.0434 | 3708 | 0.7769 | 0.0 | 0.7769 | 0.8814 |
| 0.0481 | 8.0477 | 3710 | 0.7987 | 0.0 | 0.7987 | 0.8937 |
| 0.0481 | 8.0521 | 3712 | 0.8373 | -0.1748 | 0.8373 | 0.9150 |
| 0.0481 | 8.0564 | 3714 | 0.8658 | -0.1379 | 0.8658 | 0.9305 |
| 0.0481 | 8.0607 | 3716 | 0.8735 | -0.1379 | 0.8735 | 0.9346 |
| 0.0481 | 8.0651 | 3718 | 0.8575 | -0.1379 | 0.8575 | 0.9260 |
| 0.0481 | 8.0694 | 3720 | 0.8439 | -0.1748 | 0.8439 | 0.9186 |
| 0.0481 | 8.0738 | 3722 | 0.8176 | 0.0 | 0.8176 | 0.9042 |
| 0.0481 | 8.0781 | 3724 | 0.7962 | 0.0 | 0.7962 | 0.8923 |
| 0.0481 | 8.0824 | 3726 | 0.7796 | 0.0 | 0.7796 | 0.8830 |
| 0.0481 | 8.0868 | 3728 | 0.7775 | 0.0 | 0.7775 | 0.8817 |
| 0.0481 | 8.0911 | 3730 | 0.7895 | 0.0 | 0.7895 | 0.8886 |
| 0.0481 | 8.0954 | 3732 | 0.8038 | 0.0 | 0.8038 | 0.8965 |
| 0.0481 | 8.0998 | 3734 | 0.8100 | 0.0 | 0.8100 | 0.9000 |
| 0.0481 | 8.1041 | 3736 | 0.8169 | -0.4667 | 0.8169 | 0.9038 |
| 0.0481 | 8.1085 | 3738 | 0.8056 | 0.0 | 0.8056 | 0.8975 |
| 0.0481 | 8.1128 | 3740 | 0.7899 | 0.0 | 0.7899 | 0.8888 |
| 0.0481 | 8.1171 | 3742 | 0.7878 | 0.0 | 0.7878 | 0.8876 |
| 0.0481 | 8.1215 | 3744 | 0.7877 | 0.0 | 0.7877 | 0.8875 |
| 0.0481 | 8.1258 | 3746 | 0.8001 | 0.0 | 0.8001 | 0.8945 |
| 0.0481 | 8.1302 | 3748 | 0.8258 | 0.0 | 0.8258 | 0.9087 |
| 0.0481 | 8.1345 | 3750 | 0.8657 | -0.1748 | 0.8657 | 0.9304 |
| 0.0481 | 8.1388 | 3752 | 0.8926 | -0.1085 | 0.8926 | 0.9448 |
| 0.0481 | 8.1432 | 3754 | 0.8992 | -0.1085 | 0.8992 | 0.9483 |
| 0.0481 | 8.1475 | 3756 | 0.9076 | -0.1085 | 0.9076 | 0.9527 |
| 0.0481 | 8.1518 | 3758 | 0.9148 | -0.1085 | 0.9148 | 0.9564 |
| 0.0481 | 8.1562 | 3760 | 0.9269 | -0.1085 | 0.9269 | 0.9628 |
| 0.0481 | 8.1605 | 3762 | 0.9222 | -0.1085 | 0.9222 | 0.9603 |
| 0.0481 | 8.1649 | 3764 | 0.8979 | -0.1085 | 0.8979 | 0.9476 |
| 0.0481 | 8.1692 | 3766 | 0.8650 | -0.1748 | 0.8650 | 0.9301 |
| 0.0481 | 8.1735 | 3768 | 0.8303 | -0.4667 | 0.8303 | 0.9112 |
| 0.0481 | 8.1779 | 3770 | 0.8074 | 0.0 | 0.8074 | 0.8985 |
| 0.0481 | 8.1822 | 3772 | 0.7965 | 0.0 | 0.7965 | 0.8924 |
| 0.0481 | 8.1866 | 3774 | 0.7996 | 0.0 | 0.7996 | 0.8942 |
| 0.0481 | 8.1909 | 3776 | 0.8149 | 0.0 | 0.8149 | 0.9027 |
| 0.0481 | 8.1952 | 3778 | 0.8354 | -0.4667 | 0.8354 | 0.9140 |
| 0.0481 | 8.1996 | 3780 | 0.8393 | -0.4667 | 0.8393 | 0.9162 |
| 0.0481 | 8.2039 | 3782 | 0.8274 | 0.0 | 0.8274 | 0.9096 |
| 0.0481 | 8.2082 | 3784 | 0.8220 | 0.0 | 0.8220 | 0.9067 |
| 0.0481 | 8.2126 | 3786 | 0.8229 | 0.0 | 0.8229 | 0.9071 |
| 0.0481 | 8.2169 | 3788 | 0.8321 | -0.4667 | 0.8321 | 0.9122 |
| 0.0481 | 8.2213 | 3790 | 0.8390 | -0.4667 | 0.8390 | 0.9160 |
| 0.0481 | 8.2256 | 3792 | 0.8452 | -0.4667 | 0.8452 | 0.9194 |
| 0.0481 | 8.2299 | 3794 | 0.8586 | -0.1748 | 0.8586 | 0.9266 |
| 0.0481 | 8.2343 | 3796 | 0.8648 | -0.1379 | 0.8648 | 0.9300 |
| 0.0481 | 8.2386 | 3798 | 0.8660 | -0.1379 | 0.8660 | 0.9306 |
| 0.0481 | 8.2430 | 3800 | 0.8771 | -0.1379 | 0.8771 | 0.9365 |
| 0.0481 | 8.2473 | 3802 | 0.8804 | -0.1085 | 0.8804 | 0.9383 |
| 0.0481 | 8.2516 | 3804 | 0.8736 | -0.1379 | 0.8736 | 0.9347 |
| 0.0481 | 8.2560 | 3806 | 0.8549 | -0.3883 | 0.8549 | 0.9246 |
| 0.0481 | 8.2603 | 3808 | 0.8266 | 0.0 | 0.8266 | 0.9092 |
| 0.0481 | 8.2646 | 3810 | 0.8024 | 0.0 | 0.8024 | 0.8958 |
| 0.0481 | 8.2690 | 3812 | 0.7933 | 0.0 | 0.7933 | 0.8907 |
| 0.0481 | 8.2733 | 3814 | 0.7951 | 0.0 | 0.7951 | 0.8917 |
| 0.0481 | 8.2777 | 3816 | 0.8111 | 0.0 | 0.8111 | 0.9006 |
| 0.0481 | 8.2820 | 3818 | 0.8377 | 0.0 | 0.8377 | 0.9153 |
| 0.0481 | 8.2863 | 3820 | 0.8703 | -0.1379 | 0.8703 | 0.9329 |
| 0.0481 | 8.2907 | 3822 | 0.9033 | -0.1085 | 0.9033 | 0.9504 |
| 0.0481 | 8.2950 | 3824 | 0.9256 | -0.1085 | 0.9256 | 0.9621 |
| 0.0481 | 8.2993 | 3826 | 0.9208 | -0.1085 | 0.9208 | 0.9596 |
| 0.0481 | 8.3037 | 3828 | 0.8964 | -0.1085 | 0.8964 | 0.9468 |
| 0.0481 | 8.3080 | 3830 | 0.8607 | -0.1379 | 0.8607 | 0.9278 |
| 0.0481 | 8.3124 | 3832 | 0.8233 | 0.0 | 0.8233 | 0.9074 |
| 0.0481 | 8.3167 | 3834 | 0.8049 | 0.0 | 0.8049 | 0.8972 |
| 0.0481 | 8.3210 | 3836 | 0.8040 | 0.0 | 0.8040 | 0.8967 |
| 0.0481 | 8.3254 | 3838 | 0.8130 | 0.0 | 0.8130 | 0.9016 |
| 0.0481 | 8.3297 | 3840 | 0.8232 | 0.0 | 0.8232 | 0.9073 |
| 0.0481 | 8.3341 | 3842 | 0.8334 | 0.0 | 0.8334 | 0.9129 |
| 0.0481 | 8.3384 | 3844 | 0.8563 | -0.1748 | 0.8563 | 0.9253 |
| 0.0481 | 8.3427 | 3846 | 0.8765 | -0.1379 | 0.8765 | 0.9362 |
| 0.0481 | 8.3471 | 3848 | 0.8977 | -0.1085 | 0.8977 | 0.9475 |
| 0.0481 | 8.3514 | 3850 | 0.9002 | -0.1085 | 0.9002 | 0.9488 |
| 0.0481 | 8.3557 | 3852 | 0.8837 | -0.1379 | 0.8837 | 0.9401 |
| 0.0481 | 8.3601 | 3854 | 0.8665 | -0.1379 | 0.8665 | 0.9309 |
| 0.0481 | 8.3644 | 3856 | 0.8459 | -0.1748 | 0.8459 | 0.9197 |
| 0.0481 | 8.3688 | 3858 | 0.8324 | 0.0 | 0.8324 | 0.9123 |
| 0.0481 | 8.3731 | 3860 | 0.8181 | 0.0 | 0.8181 | 0.9045 |
| 0.0481 | 8.3774 | 3862 | 0.8217 | 0.0 | 0.8217 | 0.9065 |
| 0.0481 | 8.3818 | 3864 | 0.8386 | -0.4667 | 0.8386 | 0.9158 |
| 0.0481 | 8.3861 | 3866 | 0.8499 | -0.1748 | 0.8499 | 0.9219 |
| 0.0481 | 8.3905 | 3868 | 0.8642 | -0.1379 | 0.8642 | 0.9296 |
| 0.0481 | 8.3948 | 3870 | 0.8644 | -0.1748 | 0.8644 | 0.9297 |
| 0.0481 | 8.3991 | 3872 | 0.8527 | -0.1748 | 0.8527 | 0.9234 |
| 0.0481 | 8.4035 | 3874 | 0.8287 | 0.0 | 0.8287 | 0.9103 |
| 0.0481 | 8.4078 | 3876 | 0.8119 | 0.0 | 0.8119 | 0.9011 |
| 0.0481 | 8.4121 | 3878 | 0.8047 | 0.0 | 0.8047 | 0.8971 |
| 0.0481 | 8.4165 | 3880 | 0.8074 | 0.0 | 0.8074 | 0.8986 |
| 0.0481 | 8.4208 | 3882 | 0.8165 | 0.0 | 0.8165 | 0.9036 |
| 0.0481 | 8.4252 | 3884 | 0.8336 | 0.0 | 0.8336 | 0.9130 |
| 0.0481 | 8.4295 | 3886 | 0.8460 | -0.4667 | 0.8460 | 0.9198 |
| 0.0481 | 8.4338 | 3888 | 0.8659 | -0.1748 | 0.8659 | 0.9305 |
| 0.0481 | 8.4382 | 3890 | 0.8715 | -0.1379 | 0.8715 | 0.9336 |
| 0.0481 | 8.4425 | 3892 | 0.8657 | -0.1748 | 0.8657 | 0.9304 |
| 0.0481 | 8.4469 | 3894 | 0.8621 | -0.1748 | 0.8621 | 0.9285 |
| 0.0481 | 8.4512 | 3896 | 0.8659 | -0.1748 | 0.8659 | 0.9305 |
| 0.0481 | 8.4555 | 3898 | 0.8681 | -0.1748 | 0.8681 | 0.9317 |
| 0.0481 | 8.4599 | 3900 | 0.8652 | -0.1748 | 0.8652 | 0.9301 |
| 0.0481 | 8.4642 | 3902 | 0.8620 | -0.1748 | 0.8620 | 0.9285 |
| 0.0481 | 8.4685 | 3904 | 0.8646 | -0.1748 | 0.8646 | 0.9298 |
| 0.0481 | 8.4729 | 3906 | 0.8542 | -0.4667 | 0.8542 | 0.9242 |
| 0.0481 | 8.4772 | 3908 | 0.8345 | 0.0 | 0.8345 | 0.9135 |
| 0.0481 | 8.4816 | 3910 | 0.8181 | 0.0 | 0.8181 | 0.9045 |
| 0.0481 | 8.4859 | 3912 | 0.8089 | 0.0 | 0.8089 | 0.8994 |
| 0.0481 | 8.4902 | 3914 | 0.8131 | 0.0 | 0.8131 | 0.9017 |
| 0.0481 | 8.4946 | 3916 | 0.8276 | 0.0 | 0.8276 | 0.9097 |
| 0.0481 | 8.4989 | 3918 | 0.8443 | -0.4667 | 0.8443 | 0.9189 |
| 0.0481 | 8.5033 | 3920 | 0.8716 | -0.1379 | 0.8716 | 0.9336 |
| 0.0481 | 8.5076 | 3922 | 0.8941 | -0.1085 | 0.8941 | 0.9456 |
| 0.0481 | 8.5119 | 3924 | 0.9018 | -0.1085 | 0.9018 | 0.9496 |
| 0.0481 | 8.5163 | 3926 | 0.8891 | -0.1085 | 0.8891 | 0.9429 |
| 0.0481 | 8.5206 | 3928 | 0.8806 | -0.1085 | 0.8806 | 0.9384 |
| 0.0481 | 8.5249 | 3930 | 0.8677 | -0.1379 | 0.8677 | 0.9315 |
| 0.0481 | 8.5293 | 3932 | 0.8549 | -0.1748 | 0.8549 | 0.9246 |
| 0.0481 | 8.5336 | 3934 | 0.8384 | -0.4667 | 0.8384 | 0.9156 |
| 0.0481 | 8.5380 | 3936 | 0.8256 | 0.0 | 0.8256 | 0.9086 |
| 0.0481 | 8.5423 | 3938 | 0.8137 | 0.0 | 0.8137 | 0.9020 |
| 0.0481 | 8.5466 | 3940 | 0.8102 | 0.0 | 0.8102 | 0.9001 |
| 0.0481 | 8.5510 | 3942 | 0.8078 | 0.0 | 0.8078 | 0.8988 |
| 0.0481 | 8.5553 | 3944 | 0.8061 | 0.0 | 0.8061 | 0.8978 |
| 0.0481 | 8.5597 | 3946 | 0.8096 | 0.0 | 0.8096 | 0.8998 |
| 0.0481 | 8.5640 | 3948 | 0.8249 | -0.4667 | 0.8249 | 0.9082 |
| 0.0481 | 8.5683 | 3950 | 0.8417 | -0.1748 | 0.8417 | 0.9175 |
| 0.0481 | 8.5727 | 3952 | 0.8702 | -0.1379 | 0.8702 | 0.9328 |
| 0.0481 | 8.5770 | 3954 | 0.8882 | -0.1085 | 0.8882 | 0.9424 |
| 0.0481 | 8.5813 | 3956 | 0.8951 | -0.1085 | 0.8951 | 0.9461 |
| 0.0481 | 8.5857 | 3958 | 0.8916 | -0.1085 | 0.8916 | 0.9443 |
| 0.0481 | 8.5900 | 3960 | 0.8842 | -0.1085 | 0.8842 | 0.9403 |
| 0.0481 | 8.5944 | 3962 | 0.8744 | -0.1085 | 0.8744 | 0.9351 |
| 0.0481 | 8.5987 | 3964 | 0.8639 | -0.1379 | 0.8639 | 0.9294 |
| 0.0481 | 8.6030 | 3966 | 0.8456 | -0.1748 | 0.8456 | 0.9196 |
| 0.0481 | 8.6074 | 3968 | 0.8202 | 0.0 | 0.8202 | 0.9056 |
| 0.0481 | 8.6117 | 3970 | 0.7957 | 0.0 | 0.7957 | 0.8920 |
| 0.0481 | 8.6161 | 3972 | 0.7889 | 0.0 | 0.7889 | 0.8882 |
| 0.0481 | 8.6204 | 3974 | 0.7940 | 0.0 | 0.7940 | 0.8911 |
| 0.0481 | 8.6247 | 3976 | 0.7962 | 0.0 | 0.7962 | 0.8923 |
| 0.0481 | 8.6291 | 3978 | 0.7892 | 0.0 | 0.7892 | 0.8884 |
| 0.0481 | 8.6334 | 3980 | 0.7881 | 0.0 | 0.7881 | 0.8877 |
| 0.0481 | 8.6377 | 3982 | 0.7899 | 0.0 | 0.7899 | 0.8888 |
| 0.0481 | 8.6421 | 3984 | 0.7974 | 0.0 | 0.7974 | 0.8930 |
| 0.0481 | 8.6464 | 3986 | 0.8016 | 0.0 | 0.8016 | 0.8953 |
| 0.0481 | 8.6508 | 3988 | 0.8152 | 0.0 | 0.8152 | 0.9029 |
| 0.0481 | 8.6551 | 3990 | 0.8299 | 0.0 | 0.8299 | 0.9110 |
| 0.0481 | 8.6594 | 3992 | 0.8492 | -0.1379 | 0.8492 | 0.9215 |
| 0.0481 | 8.6638 | 3994 | 0.8585 | -0.1379 | 0.8585 | 0.9266 |
| 0.0481 | 8.6681 | 3996 | 0.8551 | -0.1379 | 0.8551 | 0.9247 |
| 0.0481 | 8.6725 | 3998 | 0.8441 | -0.1379 | 0.8441 | 0.9187 |
| 0.0428 | 8.6768 | 4000 | 0.8308 | 0.0 | 0.8308 | 0.9115 |
| 0.0428 | 8.6811 | 4002 | 0.8173 | 0.0 | 0.8173 | 0.9041 |
| 0.0428 | 8.6855 | 4004 | 0.8039 | 0.0 | 0.8039 | 0.8966 |
| 0.0428 | 8.6898 | 4006 | 0.7906 | 0.0 | 0.7906 | 0.8891 |
| 0.0428 | 8.6941 | 4008 | 0.7841 | 0.0 | 0.7841 | 0.8855 |
| 0.0428 | 8.6985 | 4010 | 0.7794 | 0.0 | 0.7794 | 0.8828 |
| 0.0428 | 8.7028 | 4012 | 0.7832 | 0.0 | 0.7832 | 0.8850 |
| 0.0428 | 8.7072 | 4014 | 0.7903 | 0.0 | 0.7903 | 0.8890 |
| 0.0428 | 8.7115 | 4016 | 0.8001 | 0.0 | 0.8001 | 0.8945 |
| 0.0428 | 8.7158 | 4018 | 0.8167 | 0.0 | 0.8167 | 0.9037 |
| 0.0428 | 8.7202 | 4020 | 0.8402 | -0.4667 | 0.8402 | 0.9166 |
| 0.0428 | 8.7245 | 4022 | 0.8584 | -0.1379 | 0.8584 | 0.9265 |
| 0.0428 | 8.7289 | 4024 | 0.8682 | -0.1379 | 0.8682 | 0.9318 |
| 0.0428 | 8.7332 | 4026 | 0.8754 | -0.1085 | 0.8754 | 0.9356 |
| 0.0428 | 8.7375 | 4028 | 0.8720 | -0.1085 | 0.8720 | 0.9338 |
| 0.0428 | 8.7419 | 4030 | 0.8630 | -0.1379 | 0.8630 | 0.9290 |
| 0.0428 | 8.7462 | 4032 | 0.8437 | -0.4667 | 0.8437 | 0.9186 |
| 0.0428 | 8.7505 | 4034 | 0.8259 | 0.0 | 0.8259 | 0.9088 |
| 0.0428 | 8.7549 | 4036 | 0.8160 | 0.0 | 0.8160 | 0.9033 |
| 0.0428 | 8.7592 | 4038 | 0.8073 | 0.0 | 0.8073 | 0.8985 |
| 0.0428 | 8.7636 | 4040 | 0.8055 | 0.0 | 0.8055 | 0.8975 |
| 0.0428 | 8.7679 | 4042 | 0.8071 | 0.0 | 0.8071 | 0.8984 |
| 0.0428 | 8.7722 | 4044 | 0.8128 | 0.0 | 0.8128 | 0.9015 |
| 0.0428 | 8.7766 | 4046 | 0.8137 | 0.0 | 0.8137 | 0.9020 |
| 0.0428 | 8.7809 | 4048 | 0.8107 | 0.0 | 0.8107 | 0.9004 |
| 0.0428 | 8.7852 | 4050 | 0.8042 | 0.0 | 0.8042 | 0.8968 |
| 0.0428 | 8.7896 | 4052 | 0.8029 | 0.0 | 0.8029 | 0.8961 |
| 0.0428 | 8.7939 | 4054 | 0.8093 | 0.0 | 0.8093 | 0.8996 |
| 0.0428 | 8.7983 | 4056 | 0.8147 | 0.0 | 0.8147 | 0.9026 |
| 0.0428 | 8.8026 | 4058 | 0.8242 | 0.0 | 0.8242 | 0.9078 |
| 0.0428 | 8.8069 | 4060 | 0.8312 | 0.0 | 0.8312 | 0.9117 |
| 0.0428 | 8.8113 | 4062 | 0.8365 | 0.0 | 0.8365 | 0.9146 |
| 0.0428 | 8.8156 | 4064 | 0.8471 | -0.4667 | 0.8471 | 0.9204 |
| 0.0428 | 8.8200 | 4066 | 0.8552 | -0.4667 | 0.8552 | 0.9248 |
| 0.0428 | 8.8243 | 4068 | 0.8641 | -0.3883 | 0.8641 | 0.9296 |
| 0.0428 | 8.8286 | 4070 | 0.8682 | -0.1379 | 0.8682 | 0.9318 |
| 0.0428 | 8.8330 | 4072 | 0.8615 | -0.4667 | 0.8615 | 0.9282 |
| 0.0428 | 8.8373 | 4074 | 0.8527 | -0.4667 | 0.8527 | 0.9234 |
| 0.0428 | 8.8416 | 4076 | 0.8519 | -0.4667 | 0.8519 | 0.9230 |
| 0.0428 | 8.8460 | 4078 | 0.8460 | -0.4667 | 0.8460 | 0.9198 |
| 0.0428 | 8.8503 | 4080 | 0.8369 | -0.4667 | 0.8369 | 0.9148 |
| 0.0428 | 8.8547 | 4082 | 0.8403 | -0.4667 | 0.8403 | 0.9167 |
| 0.0428 | 8.8590 | 4084 | 0.8499 | -0.4667 | 0.8499 | 0.9219 |
| 0.0428 | 8.8633 | 4086 | 0.8668 | -0.1379 | 0.8668 | 0.9310 |
| 0.0428 | 8.8677 | 4088 | 0.8784 | -0.1085 | 0.8784 | 0.9372 |
| 0.0428 | 8.8720 | 4090 | 0.8848 | -0.1085 | 0.8848 | 0.9406 |
| 0.0428 | 8.8764 | 4092 | 0.8801 | -0.1085 | 0.8801 | 0.9381 |
| 0.0428 | 8.8807 | 4094 | 0.8640 | -0.1748 | 0.8640 | 0.9295 |
| 0.0428 | 8.8850 | 4096 | 0.8478 | -0.4667 | 0.8478 | 0.9207 |
| 0.0428 | 8.8894 | 4098 | 0.8454 | -0.4667 | 0.8454 | 0.9195 |
| 0.0428 | 8.8937 | 4100 | 0.8536 | -0.4667 | 0.8536 | 0.9239 |
| 0.0428 | 8.8980 | 4102 | 0.8694 | -0.1748 | 0.8694 | 0.9324 |
| 0.0428 | 8.9024 | 4104 | 0.8855 | -0.1085 | 0.8855 | 0.9410 |
| 0.0428 | 8.9067 | 4106 | 0.8977 | -0.1085 | 0.8977 | 0.9475 |
| 0.0428 | 8.9111 | 4108 | 0.9137 | -0.1085 | 0.9137 | 0.9559 |
| 0.0428 | 8.9154 | 4110 | 0.9337 | -0.1085 | 0.9337 | 0.9663 |
| 0.0428 | 8.9197 | 4112 | 0.9344 | -0.1085 | 0.9344 | 0.9666 |
| 0.0428 | 8.9241 | 4114 | 0.9309 | -0.1085 | 0.9309 | 0.9648 |
| 0.0428 | 8.9284 | 4116 | 0.9199 | -0.1085 | 0.9199 | 0.9591 |
| 0.0428 | 8.9328 | 4118 | 0.9001 | -0.1085 | 0.9001 | 0.9487 |
| 0.0428 | 8.9371 | 4120 | 0.8779 | -0.1379 | 0.8779 | 0.9370 |
| 0.0428 | 8.9414 | 4122 | 0.8708 | -0.1379 | 0.8708 | 0.9332 |
| 0.0428 | 8.9458 | 4124 | 0.8729 | -0.1379 | 0.8729 | 0.9343 |
| 0.0428 | 8.9501 | 4126 | 0.8796 | -0.1379 | 0.8796 | 0.9379 |
| 0.0428 | 8.9544 | 4128 | 0.8862 | -0.1085 | 0.8862 | 0.9414 |
| 0.0428 | 8.9588 | 4130 | 0.8897 | -0.1085 | 0.8897 | 0.9432 |
| 0.0428 | 8.9631 | 4132 | 0.8941 | -0.1085 | 0.8941 | 0.9456 |
| 0.0428 | 8.9675 | 4134 | 0.8969 | -0.1085 | 0.8969 | 0.9471 |
| 0.0428 | 8.9718 | 4136 | 0.8996 | -0.1085 | 0.8996 | 0.9485 |
| 0.0428 | 8.9761 | 4138 | 0.9109 | -0.1085 | 0.9109 | 0.9544 |
| 0.0428 | 8.9805 | 4140 | 0.9136 | -0.1085 | 0.9136 | 0.9558 |
| 0.0428 | 8.9848 | 4142 | 0.9190 | -0.1085 | 0.9190 | 0.9586 |
| 0.0428 | 8.9892 | 4144 | 0.9148 | -0.1085 | 0.9148 | 0.9565 |
| 0.0428 | 8.9935 | 4146 | 0.9022 | -0.1085 | 0.9022 | 0.9498 |
| 0.0428 | 8.9978 | 4148 | 0.8880 | -0.1085 | 0.8880 | 0.9424 |
| 0.0428 | 9.0022 | 4150 | 0.8873 | -0.1085 | 0.8873 | 0.9420 |
| 0.0428 | 9.0065 | 4152 | 0.8944 | -0.1085 | 0.8944 | 0.9457 |
| 0.0428 | 9.0108 | 4154 | 0.8880 | -0.1085 | 0.8880 | 0.9423 |
| 0.0428 | 9.0152 | 4156 | 0.8729 | -0.1379 | 0.8729 | 0.9343 |
| 0.0428 | 9.0195 | 4158 | 0.8622 | -0.1379 | 0.8622 | 0.9285 |
| 0.0428 | 9.0239 | 4160 | 0.8501 | -0.1748 | 0.8501 | 0.9220 |
| 0.0428 | 9.0282 | 4162 | 0.8397 | -0.4667 | 0.8397 | 0.9164 |
| 0.0428 | 9.0325 | 4164 | 0.8391 | -0.4667 | 0.8391 | 0.9160 |
| 0.0428 | 9.0369 | 4166 | 0.8432 | -0.4667 | 0.8432 | 0.9182 |
| 0.0428 | 9.0412 | 4168 | 0.8544 | -0.1379 | 0.8544 | 0.9243 |
| 0.0428 | 9.0456 | 4170 | 0.8664 | -0.1379 | 0.8664 | 0.9308 |
| 0.0428 | 9.0499 | 4172 | 0.8824 | -0.1085 | 0.8824 | 0.9394 |
| 0.0428 | 9.0542 | 4174 | 0.8950 | -0.1085 | 0.8950 | 0.9461 |
| 0.0428 | 9.0586 | 4176 | 0.9008 | -0.1085 | 0.9008 | 0.9491 |
| 0.0428 | 9.0629 | 4178 | 0.9005 | -0.1085 | 0.9005 | 0.9490 |
| 0.0428 | 9.0672 | 4180 | 0.8960 | -0.1085 | 0.8960 | 0.9466 |
| 0.0428 | 9.0716 | 4182 | 0.8950 | -0.1085 | 0.8950 | 0.9460 |
| 0.0428 | 9.0759 | 4184 | 0.8854 | -0.1379 | 0.8854 | 0.9410 |
| 0.0428 | 9.0803 | 4186 | 0.8838 | -0.1379 | 0.8838 | 0.9401 |
| 0.0428 | 9.0846 | 4188 | 0.8830 | -0.1379 | 0.8830 | 0.9397 |
| 0.0428 | 9.0889 | 4190 | 0.8901 | -0.1085 | 0.8901 | 0.9435 |
| 0.0428 | 9.0933 | 4192 | 0.9019 | -0.1085 | 0.9019 | 0.9497 |
| 0.0428 | 9.0976 | 4194 | 0.9153 | -0.1085 | 0.9153 | 0.9567 |
| 0.0428 | 9.1020 | 4196 | 0.9137 | -0.1085 | 0.9137 | 0.9559 |
| 0.0428 | 9.1063 | 4198 | 0.9038 | -0.1085 | 0.9038 | 0.9507 |
| 0.0428 | 9.1106 | 4200 | 0.8870 | -0.1379 | 0.8870 | 0.9418 |
| 0.0428 | 9.1150 | 4202 | 0.8646 | -0.1748 | 0.8646 | 0.9298 |
| 0.0428 | 9.1193 | 4204 | 0.8491 | -0.4667 | 0.8491 | 0.9215 |
| 0.0428 | 9.1236 | 4206 | 0.8344 | 0.0 | 0.8344 | 0.9134 |
| 0.0428 | 9.1280 | 4208 | 0.8296 | 0.0 | 0.8296 | 0.9108 |
| 0.0428 | 9.1323 | 4210 | 0.8316 | 0.0 | 0.8316 | 0.9119 |
| 0.0428 | 9.1367 | 4212 | 0.8408 | -0.4667 | 0.8408 | 0.9170 |
| 0.0428 | 9.1410 | 4214 | 0.8554 | -0.1748 | 0.8554 | 0.9249 |
| 0.0428 | 9.1453 | 4216 | 0.8731 | -0.1379 | 0.8731 | 0.9344 |
| 0.0428 | 9.1497 | 4218 | 0.8842 | -0.1379 | 0.8842 | 0.9403 |
| 0.0428 | 9.1540 | 4220 | 0.8928 | -0.1085 | 0.8928 | 0.9449 |
| 0.0428 | 9.1584 | 4222 | 0.9002 | -0.1085 | 0.9002 | 0.9488 |
| 0.0428 | 9.1627 | 4224 | 0.9061 | -0.1085 | 0.9061 | 0.9519 |
| 0.0428 | 9.1670 | 4226 | 0.9117 | -0.1085 | 0.9117 | 0.9548 |
| 0.0428 | 9.1714 | 4228 | 0.9122 | -0.1085 | 0.9122 | 0.9551 |
| 0.0428 | 9.1757 | 4230 | 0.9066 | -0.1085 | 0.9066 | 0.9521 |
| 0.0428 | 9.1800 | 4232 | 0.8987 | -0.1085 | 0.8987 | 0.9480 |
| 0.0428 | 9.1844 | 4234 | 0.8909 | -0.1085 | 0.8909 | 0.9439 |
| 0.0428 | 9.1887 | 4236 | 0.8868 | -0.1379 | 0.8868 | 0.9417 |
| 0.0428 | 9.1931 | 4238 | 0.8808 | -0.1379 | 0.8808 | 0.9385 |
| 0.0428 | 9.1974 | 4240 | 0.8752 | -0.1379 | 0.8752 | 0.9355 |
| 0.0428 | 9.2017 | 4242 | 0.8691 | -0.1379 | 0.8691 | 0.9323 |
| 0.0428 | 9.2061 | 4244 | 0.8678 | -0.1748 | 0.8678 | 0.9316 |
| 0.0428 | 9.2104 | 4246 | 0.8726 | -0.1379 | 0.8726 | 0.9341 |
| 0.0428 | 9.2148 | 4248 | 0.8724 | -0.1379 | 0.8724 | 0.9340 |
| 0.0428 | 9.2191 | 4250 | 0.8685 | -0.1379 | 0.8685 | 0.9319 |
| 0.0428 | 9.2234 | 4252 | 0.8612 | -0.1748 | 0.8612 | 0.9280 |
| 0.0428 | 9.2278 | 4254 | 0.8547 | -0.4667 | 0.8547 | 0.9245 |
| 0.0428 | 9.2321 | 4256 | 0.8478 | -0.4667 | 0.8478 | 0.9207 |
| 0.0428 | 9.2364 | 4258 | 0.8449 | -0.4667 | 0.8449 | 0.9192 |
| 0.0428 | 9.2408 | 4260 | 0.8439 | -0.4667 | 0.8439 | 0.9186 |
| 0.0428 | 9.2451 | 4262 | 0.8463 | -0.4667 | 0.8463 | 0.9199 |
| 0.0428 | 9.2495 | 4264 | 0.8484 | -0.4667 | 0.8484 | 0.9211 |
| 0.0428 | 9.2538 | 4266 | 0.8569 | -0.1748 | 0.8569 | 0.9257 |
| 0.0428 | 9.2581 | 4268 | 0.8622 | -0.1748 | 0.8622 | 0.9285 |
| 0.0428 | 9.2625 | 4270 | 0.8645 | -0.1379 | 0.8645 | 0.9298 |
| 0.0428 | 9.2668 | 4272 | 0.8713 | -0.1379 | 0.8713 | 0.9335 |
| 0.0428 | 9.2711 | 4274 | 0.8746 | -0.1379 | 0.8746 | 0.9352 |
| 0.0428 | 9.2755 | 4276 | 0.8822 | -0.1085 | 0.8822 | 0.9392 |
| 0.0428 | 9.2798 | 4278 | 0.8926 | -0.1085 | 0.8926 | 0.9448 |
| 0.0428 | 9.2842 | 4280 | 0.9062 | -0.1085 | 0.9062 | 0.9520 |
| 0.0428 | 9.2885 | 4282 | 0.9123 | -0.1085 | 0.9123 | 0.9551 |
| 0.0428 | 9.2928 | 4284 | 0.9190 | -0.1085 | 0.9190 | 0.9586 |
| 0.0428 | 9.2972 | 4286 | 0.9182 | -0.1085 | 0.9182 | 0.9582 |
| 0.0428 | 9.3015 | 4288 | 0.9149 | -0.1085 | 0.9149 | 0.9565 |
| 0.0428 | 9.3059 | 4290 | 0.9149 | -0.1085 | 0.9149 | 0.9565 |
| 0.0428 | 9.3102 | 4292 | 0.9097 | -0.1085 | 0.9097 | 0.9538 |
| 0.0428 | 9.3145 | 4294 | 0.9012 | -0.1085 | 0.9012 | 0.9493 |
| 0.0428 | 9.3189 | 4296 | 0.8884 | -0.1085 | 0.8884 | 0.9426 |
| 0.0428 | 9.3232 | 4298 | 0.8760 | -0.1379 | 0.8760 | 0.9359 |
| 0.0428 | 9.3275 | 4300 | 0.8629 | -0.1748 | 0.8629 | 0.9289 |
| 0.0428 | 9.3319 | 4302 | 0.8524 | -0.1748 | 0.8524 | 0.9232 |
| 0.0428 | 9.3362 | 4304 | 0.8452 | -0.4667 | 0.8452 | 0.9194 |
| 0.0428 | 9.3406 | 4306 | 0.8428 | -0.4667 | 0.8428 | 0.9180 |
| 0.0428 | 9.3449 | 4308 | 0.8410 | -0.4667 | 0.8410 | 0.9170 |
| 0.0428 | 9.3492 | 4310 | 0.8431 | -0.4667 | 0.8431 | 0.9182 |
| 0.0428 | 9.3536 | 4312 | 0.8462 | -0.4667 | 0.8462 | 0.9199 |
| 0.0428 | 9.3579 | 4314 | 0.8508 | -0.1748 | 0.8508 | 0.9224 |
| 0.0428 | 9.3623 | 4316 | 0.8607 | -0.1748 | 0.8607 | 0.9277 |
| 0.0428 | 9.3666 | 4318 | 0.8717 | -0.1748 | 0.8717 | 0.9336 |
| 0.0428 | 9.3709 | 4320 | 0.8788 | -0.1379 | 0.8788 | 0.9374 |
| 0.0428 | 9.3753 | 4322 | 0.8877 | -0.1085 | 0.8877 | 0.9422 |
| 0.0428 | 9.3796 | 4324 | 0.8914 | -0.1085 | 0.8914 | 0.9442 |
| 0.0428 | 9.3839 | 4326 | 0.8980 | -0.1085 | 0.8980 | 0.9476 |
| 0.0428 | 9.3883 | 4328 | 0.9047 | -0.1085 | 0.9047 | 0.9512 |
| 0.0428 | 9.3926 | 4330 | 0.9076 | -0.1085 | 0.9076 | 0.9527 |
| 0.0428 | 9.3970 | 4332 | 0.9026 | -0.1085 | 0.9026 | 0.9500 |
| 0.0428 | 9.4013 | 4334 | 0.8914 | -0.1085 | 0.8914 | 0.9441 |
| 0.0428 | 9.4056 | 4336 | 0.8797 | -0.1085 | 0.8797 | 0.9379 |
| 0.0428 | 9.4100 | 4338 | 0.8727 | -0.1748 | 0.8727 | 0.9342 |
| 0.0428 | 9.4143 | 4340 | 0.8622 | -0.1748 | 0.8622 | 0.9286 |
| 0.0428 | 9.4187 | 4342 | 0.8578 | -0.1748 | 0.8578 | 0.9262 |
| 0.0428 | 9.4230 | 4344 | 0.8567 | -0.1748 | 0.8567 | 0.9256 |
| 0.0428 | 9.4273 | 4346 | 0.8584 | -0.1748 | 0.8584 | 0.9265 |
| 0.0428 | 9.4317 | 4348 | 0.8563 | -0.1748 | 0.8563 | 0.9254 |
| 0.0428 | 9.4360 | 4350 | 0.8552 | -0.1748 | 0.8552 | 0.9248 |
| 0.0428 | 9.4403 | 4352 | 0.8569 | -0.1748 | 0.8569 | 0.9257 |
| 0.0428 | 9.4447 | 4354 | 0.8606 | -0.1748 | 0.8606 | 0.9277 |
| 0.0428 | 9.4490 | 4356 | 0.8648 | -0.1748 | 0.8648 | 0.9299 |
| 0.0428 | 9.4534 | 4358 | 0.8680 | -0.1748 | 0.8680 | 0.9317 |
| 0.0428 | 9.4577 | 4360 | 0.8737 | -0.1379 | 0.8737 | 0.9347 |
| 0.0428 | 9.4620 | 4362 | 0.8787 | -0.1085 | 0.8787 | 0.9374 |
| 0.0428 | 9.4664 | 4364 | 0.8852 | -0.1085 | 0.8852 | 0.9408 |
| 0.0428 | 9.4707 | 4366 | 0.8878 | -0.1085 | 0.8878 | 0.9423 |
| 0.0428 | 9.4751 | 4368 | 0.8901 | -0.1085 | 0.8901 | 0.9435 |
| 0.0428 | 9.4794 | 4370 | 0.8899 | -0.1085 | 0.8899 | 0.9434 |
| 0.0428 | 9.4837 | 4372 | 0.8861 | -0.1085 | 0.8861 | 0.9413 |
| 0.0428 | 9.4881 | 4374 | 0.8844 | -0.1085 | 0.8844 | 0.9404 |
| 0.0428 | 9.4924 | 4376 | 0.8838 | -0.1085 | 0.8838 | 0.9401 |
| 0.0428 | 9.4967 | 4378 | 0.8867 | -0.1085 | 0.8867 | 0.9417 |
| 0.0428 | 9.5011 | 4380 | 0.8877 | -0.1085 | 0.8877 | 0.9422 |
| 0.0428 | 9.5054 | 4382 | 0.8847 | -0.1085 | 0.8847 | 0.9406 |
| 0.0428 | 9.5098 | 4384 | 0.8771 | -0.1085 | 0.8771 | 0.9365 |
| 0.0428 | 9.5141 | 4386 | 0.8698 | -0.1379 | 0.8698 | 0.9326 |
| 0.0428 | 9.5184 | 4388 | 0.8684 | -0.1379 | 0.8684 | 0.9319 |
| 0.0428 | 9.5228 | 4390 | 0.8661 | -0.1379 | 0.8661 | 0.9306 |
| 0.0428 | 9.5271 | 4392 | 0.8627 | -0.1748 | 0.8627 | 0.9288 |
| 0.0428 | 9.5315 | 4394 | 0.8614 | -0.1748 | 0.8614 | 0.9281 |
| 0.0428 | 9.5358 | 4396 | 0.8584 | -0.1748 | 0.8584 | 0.9265 |
| 0.0428 | 9.5401 | 4398 | 0.8602 | -0.1748 | 0.8602 | 0.9274 |
| 0.0428 | 9.5445 | 4400 | 0.8586 | -0.1748 | 0.8586 | 0.9266 |
| 0.0428 | 9.5488 | 4402 | 0.8530 | -0.1748 | 0.8530 | 0.9236 |
| 0.0428 | 9.5531 | 4404 | 0.8446 | -0.4667 | 0.8446 | 0.9190 |
| 0.0428 | 9.5575 | 4406 | 0.8379 | -0.4667 | 0.8379 | 0.9154 |
| 0.0428 | 9.5618 | 4408 | 0.8339 | 0.0 | 0.8339 | 0.9132 |
| 0.0428 | 9.5662 | 4410 | 0.8300 | 0.0 | 0.8300 | 0.9111 |
| 0.0428 | 9.5705 | 4412 | 0.8250 | 0.0 | 0.8250 | 0.9083 |
| 0.0428 | 9.5748 | 4414 | 0.8216 | 0.0 | 0.8216 | 0.9064 |
| 0.0428 | 9.5792 | 4416 | 0.8189 | 0.0 | 0.8189 | 0.9049 |
| 0.0428 | 9.5835 | 4418 | 0.8191 | 0.0 | 0.8191 | 0.9051 |
| 0.0428 | 9.5879 | 4420 | 0.8189 | 0.0 | 0.8189 | 0.9049 |
| 0.0428 | 9.5922 | 4422 | 0.8227 | 0.0 | 0.8227 | 0.9070 |
| 0.0428 | 9.5965 | 4424 | 0.8276 | 0.0 | 0.8276 | 0.9097 |
| 0.0428 | 9.6009 | 4426 | 0.8322 | 0.0 | 0.8322 | 0.9122 |
| 0.0428 | 9.6052 | 4428 | 0.8337 | 0.0 | 0.8337 | 0.9130 |
| 0.0428 | 9.6095 | 4430 | 0.8334 | 0.0 | 0.8334 | 0.9129 |
| 0.0428 | 9.6139 | 4432 | 0.8330 | 0.0 | 0.8330 | 0.9127 |
| 0.0428 | 9.6182 | 4434 | 0.8328 | 0.0 | 0.8328 | 0.9126 |
| 0.0428 | 9.6226 | 4436 | 0.8321 | 0.0 | 0.8321 | 0.9122 |
| 0.0428 | 9.6269 | 4438 | 0.8329 | 0.0 | 0.8329 | 0.9126 |
| 0.0428 | 9.6312 | 4440 | 0.8311 | 0.0 | 0.8311 | 0.9116 |
| 0.0428 | 9.6356 | 4442 | 0.8288 | 0.0 | 0.8288 | 0.9104 |
| 0.0428 | 9.6399 | 4444 | 0.8296 | 0.0 | 0.8296 | 0.9108 |
| 0.0428 | 9.6443 | 4446 | 0.8312 | 0.0 | 0.8312 | 0.9117 |
| 0.0428 | 9.6486 | 4448 | 0.8337 | 0.0 | 0.8337 | 0.9131 |
| 0.0428 | 9.6529 | 4450 | 0.8378 | -0.4667 | 0.8378 | 0.9153 |
| 0.0428 | 9.6573 | 4452 | 0.8437 | -0.4667 | 0.8437 | 0.9185 |
| 0.0428 | 9.6616 | 4454 | 0.8490 | -0.1748 | 0.8490 | 0.9214 |
| 0.0428 | 9.6659 | 4456 | 0.8515 | -0.1748 | 0.8515 | 0.9228 |
| 0.0428 | 9.6703 | 4458 | 0.8525 | -0.1748 | 0.8525 | 0.9233 |
| 0.0428 | 9.6746 | 4460 | 0.8542 | -0.1748 | 0.8542 | 0.9242 |
| 0.0428 | 9.6790 | 4462 | 0.8541 | -0.1748 | 0.8541 | 0.9242 |
| 0.0428 | 9.6833 | 4464 | 0.8552 | -0.1748 | 0.8552 | 0.9248 |
| 0.0428 | 9.6876 | 4466 | 0.8541 | -0.1748 | 0.8541 | 0.9242 |
| 0.0428 | 9.6920 | 4468 | 0.8514 | -0.1748 | 0.8514 | 0.9227 |
| 0.0428 | 9.6963 | 4470 | 0.8500 | -0.1748 | 0.8500 | 0.9219 |
| 0.0428 | 9.7007 | 4472 | 0.8504 | -0.1748 | 0.8504 | 0.9222 |
| 0.0428 | 9.7050 | 4474 | 0.8521 | -0.1748 | 0.8521 | 0.9231 |
| 0.0428 | 9.7093 | 4476 | 0.8533 | -0.1748 | 0.8533 | 0.9238 |
| 0.0428 | 9.7137 | 4478 | 0.8520 | -0.1748 | 0.8520 | 0.9230 |
| 0.0428 | 9.7180 | 4480 | 0.8525 | -0.1748 | 0.8525 | 0.9233 |
| 0.0428 | 9.7223 | 4482 | 0.8513 | -0.1748 | 0.8513 | 0.9226 |
| 0.0428 | 9.7267 | 4484 | 0.8477 | -0.1748 | 0.8477 | 0.9207 |
| 0.0428 | 9.7310 | 4486 | 0.8422 | -0.4667 | 0.8422 | 0.9177 |
| 0.0428 | 9.7354 | 4488 | 0.8362 | -0.4667 | 0.8362 | 0.9144 |
| 0.0428 | 9.7397 | 4490 | 0.8334 | 0.0 | 0.8334 | 0.9129 |
| 0.0428 | 9.7440 | 4492 | 0.8301 | 0.0 | 0.8301 | 0.9111 |
| 0.0428 | 9.7484 | 4494 | 0.8276 | 0.0 | 0.8276 | 0.9097 |
| 0.0428 | 9.7527 | 4496 | 0.8274 | 0.0 | 0.8274 | 0.9096 |
| 0.0428 | 9.7570 | 4498 | 0.8273 | 0.0 | 0.8273 | 0.9095 |
| 0.0392 | 9.7614 | 4500 | 0.8264 | 0.0 | 0.8264 | 0.9091 |
| 0.0392 | 9.7657 | 4502 | 0.8260 | 0.0 | 0.8260 | 0.9088 |
| 0.0392 | 9.7701 | 4504 | 0.8258 | 0.0 | 0.8258 | 0.9087 |
| 0.0392 | 9.7744 | 4506 | 0.8254 | 0.0 | 0.8254 | 0.9085 |
| 0.0392 | 9.7787 | 4508 | 0.8243 | 0.0 | 0.8243 | 0.9079 |
| 0.0392 | 9.7831 | 4510 | 0.8246 | 0.0 | 0.8246 | 0.9081 |
| 0.0392 | 9.7874 | 4512 | 0.8269 | 0.0 | 0.8269 | 0.9093 |
| 0.0392 | 9.7918 | 4514 | 0.8299 | 0.0 | 0.8299 | 0.9110 |
| 0.0392 | 9.7961 | 4516 | 0.8326 | 0.0 | 0.8326 | 0.9125 |
| 0.0392 | 9.8004 | 4518 | 0.8366 | -0.4667 | 0.8366 | 0.9146 |
| 0.0392 | 9.8048 | 4520 | 0.8411 | -0.4667 | 0.8411 | 0.9171 |
| 0.0392 | 9.8091 | 4522 | 0.8458 | -0.1748 | 0.8458 | 0.9196 |
| 0.0392 | 9.8134 | 4524 | 0.8502 | -0.1748 | 0.8502 | 0.9221 |
| 0.0392 | 9.8178 | 4526 | 0.8521 | -0.1748 | 0.8521 | 0.9231 |
| 0.0392 | 9.8221 | 4528 | 0.8535 | -0.1748 | 0.8535 | 0.9239 |
| 0.0392 | 9.8265 | 4530 | 0.8547 | -0.1748 | 0.8547 | 0.9245 |
| 0.0392 | 9.8308 | 4532 | 0.8557 | -0.1748 | 0.8557 | 0.9250 |
| 0.0392 | 9.8351 | 4534 | 0.8558 | -0.1748 | 0.8558 | 0.9251 |
| 0.0392 | 9.8395 | 4536 | 0.8553 | -0.1748 | 0.8553 | 0.9248 |
| 0.0392 | 9.8438 | 4538 | 0.8549 | -0.1748 | 0.8549 | 0.9246 |
| 0.0392 | 9.8482 | 4540 | 0.8546 | -0.1748 | 0.8546 | 0.9244 |
| 0.0392 | 9.8525 | 4542 | 0.8536 | -0.1748 | 0.8536 | 0.9239 |
| 0.0392 | 9.8568 | 4544 | 0.8524 | -0.1748 | 0.8524 | 0.9232 |
| 0.0392 | 9.8612 | 4546 | 0.8520 | -0.1748 | 0.8520 | 0.9230 |
| 0.0392 | 9.8655 | 4548 | 0.8514 | -0.1748 | 0.8514 | 0.9227 |
| 0.0392 | 9.8698 | 4550 | 0.8503 | -0.1748 | 0.8503 | 0.9221 |
| 0.0392 | 9.8742 | 4552 | 0.8484 | -0.1748 | 0.8484 | 0.9211 |
| 0.0392 | 9.8785 | 4554 | 0.8470 | -0.1748 | 0.8470 | 0.9203 |
| 0.0392 | 9.8829 | 4556 | 0.8452 | -0.1748 | 0.8452 | 0.9194 |
| 0.0392 | 9.8872 | 4558 | 0.8434 | -0.4667 | 0.8434 | 0.9184 |
| 0.0392 | 9.8915 | 4560 | 0.8427 | -0.4667 | 0.8427 | 0.9180 |
| 0.0392 | 9.8959 | 4562 | 0.8420 | -0.4667 | 0.8420 | 0.9176 |
| 0.0392 | 9.9002 | 4564 | 0.8414 | -0.4667 | 0.8414 | 0.9173 |
| 0.0392 | 9.9046 | 4566 | 0.8409 | -0.4667 | 0.8409 | 0.9170 |
| 0.0392 | 9.9089 | 4568 | 0.8400 | -0.4667 | 0.8400 | 0.9165 |
| 0.0392 | 9.9132 | 4570 | 0.8395 | -0.4667 | 0.8395 | 0.9162 |
| 0.0392 | 9.9176 | 4572 | 0.8399 | -0.4667 | 0.8399 | 0.9164 |
| 0.0392 | 9.9219 | 4574 | 0.8407 | -0.4667 | 0.8407 | 0.9169 |
| 0.0392 | 9.9262 | 4576 | 0.8420 | -0.4667 | 0.8420 | 0.9176 |
| 0.0392 | 9.9306 | 4578 | 0.8430 | -0.4667 | 0.8430 | 0.9181 |
| 0.0392 | 9.9349 | 4580 | 0.8439 | -0.4667 | 0.8439 | 0.9187 |
| 0.0392 | 9.9393 | 4582 | 0.8445 | -0.1748 | 0.8445 | 0.9190 |
| 0.0392 | 9.9436 | 4584 | 0.8448 | -0.1748 | 0.8448 | 0.9191 |
| 0.0392 | 9.9479 | 4586 | 0.8448 | -0.1748 | 0.8448 | 0.9191 |
| 0.0392 | 9.9523 | 4588 | 0.8450 | -0.1748 | 0.8450 | 0.9192 |
| 0.0392 | 9.9566 | 4590 | 0.8450 | -0.1748 | 0.8450 | 0.9192 |
| 0.0392 | 9.9610 | 4592 | 0.8454 | -0.1748 | 0.8454 | 0.9195 |
| 0.0392 | 9.9653 | 4594 | 0.8461 | -0.1748 | 0.8461 | 0.9198 |
| 0.0392 | 9.9696 | 4596 | 0.8465 | -0.1748 | 0.8465 | 0.9200 |
| 0.0392 | 9.9740 | 4598 | 0.8466 | -0.1748 | 0.8466 | 0.9201 |
| 0.0392 | 9.9783 | 4600 | 0.8467 | -0.1748 | 0.8467 | 0.9202 |
| 0.0392 | 9.9826 | 4602 | 0.8467 | -0.1748 | 0.8467 | 0.9202 |
| 0.0392 | 9.9870 | 4604 | 0.8467 | -0.1748 | 0.8467 | 0.9202 |
| 0.0392 | 9.9913 | 4606 | 0.8467 | -0.1748 | 0.8467 | 0.9201 |
| 0.0392 | 9.9957 | 4608 | 0.8466 | -0.1748 | 0.8466 | 0.9201 |
| 0.0392 | 10.0 | 4610 | 0.8466 | -0.1748 | 0.8466 | 0.9201 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
danelcsb/vitpose-base-coco-aic-mpii
|
danelcsb
| 2024-11-20T13:39:34Z | 42 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vitpose",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T13:38:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaziyarPanahi/Mistral-Large-Instruct-2411-GGUF
|
MaziyarPanahi
| 2024-11-20T13:39:30Z | 707,932 | 1 | null |
[
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:mistralai/Mistral-Large-Instruct-2411",
"base_model:quantized:mistralai/Mistral-Large-Instruct-2411",
"region:us",
"conversational"
] |
text-generation
| 2024-11-18T22:30:28Z |
---
base_model: mistralai/Mistral-Large-Instruct-2411
inference: false
model_creator: mistralai
model_name: Mistral-Large-Instruct-2411-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Mistral-Large-Instruct-2411-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Large-Instruct-2411-GGUF)
- Model creator: [mistralai](https://huggingface.co/mistralai)
- Original model: [mistralai/Mistral-Large-Instruct-2411](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411)
## Description
[MaziyarPanahi/Mistral-Large-Instruct-2411-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Large-Instruct-2411-GGUF) contains GGUF format model files for [mistralai/Mistral-Large-Instruct-2411](https://huggingface.co/mistralai/Mistral-Large-Instruct-2411).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
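As a quick illustration, a quant from this repo can be loaded with one of the clients above, llama-cpp-python (a minimal sketch; the local GGUF filename is an assumption and should match whichever quant you download):
```python
# Minimal sketch using llama-cpp-python; the filename below is hypothetical --
# substitute the quant file you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Large-Instruct-2411.Q4_K_M.gguf",
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU when available
)

out = llm("Q: What is the GGUF format? A:", max_tokens=64)
print(out["choices"][0]["text"])
```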
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
|
cristianmanosalva/NER-finetuning-BBU-CM-V1
|
cristianmanosalva
| 2024-11-20T13:38:12Z | 120 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:biobert_json",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-20T13:37:30Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
datasets:
- biobert_json
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-finetuning-BBU-CM-V1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: biobert_json
type: biobert_json
config: Biobert_json
split: validation
args: Biobert_json
metrics:
- name: Precision
type: precision
value: 0.9299403078856425
- name: Recall
type: recall
value: 0.9512587038028923
- name: F1
type: f1
value: 0.9404787121372591
- name: Accuracy
type: accuracy
value: 0.9771331458040319
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetuning-BBU-CM-V1
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the biobert_json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1111
- Precision: 0.9299
- Recall: 0.9513
- F1: 0.9405
- Accuracy: 0.9771
## Model description
More information needed
## Intended uses & limitations
More information needed
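A minimal inference sketch (the checkpoint id comes from this card; the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned biomedical NER checkpoint from the Hub.
ner = pipeline(
    "token-classification",
    model="cristianmanosalva/NER-finetuning-BBU-CM-V1",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

print(ner("The patient was prescribed metformin for type 2 diabetes."))
```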
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4331 | 1.0 | 612 | 0.1091 | 0.8914 | 0.9413 | 0.9156 | 0.9713 |
| 0.1313 | 2.0 | 1224 | 0.1077 | 0.8941 | 0.9494 | 0.9209 | 0.9718 |
| 0.0869 | 3.0 | 1836 | 0.0888 | 0.9308 | 0.9555 | 0.9430 | 0.9786 |
| 0.0726 | 4.0 | 2448 | 0.0957 | 0.9253 | 0.9578 | 0.9413 | 0.9767 |
| 0.0507 | 5.0 | 3060 | 0.0936 | 0.9287 | 0.9554 | 0.9419 | 0.9770 |
| 0.0451 | 6.0 | 3672 | 0.1051 | 0.9276 | 0.9538 | 0.9405 | 0.9762 |
| 0.0383 | 7.0 | 4284 | 0.1038 | 0.9218 | 0.9576 | 0.9394 | 0.9760 |
| 0.036 | 8.0 | 4896 | 0.1094 | 0.9245 | 0.9533 | 0.9387 | 0.9765 |
| 0.0284 | 9.0 | 5508 | 0.1082 | 0.9296 | 0.9516 | 0.9404 | 0.9768 |
| 0.0256 | 10.0 | 6120 | 0.1111 | 0.9299 | 0.9513 | 0.9405 | 0.9771 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
cvmil/deit-base-patch16-224_rice-disease-02
|
cvmil
| 2024-11-20T13:37:48Z | 203 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-20T04:13:46Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deit-base-patch16-224_rice-disease-02_112024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224_rice-disease-02_112024
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3063
- Accuracy: 0.9148
## Model description
More information needed
## Intended uses & limitations
More information needed
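A minimal inference sketch (the checkpoint id comes from this card; the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned rice-disease classifier from the Hub.
classifier = pipeline(
    "image-classification",
    model="cvmil/deit-base-patch16-224_rice-disease-02",
)

# "leaf.jpg" is a hypothetical local photo of a rice leaf.
print(classifier("leaf.jpg"))
```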
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 1.8862 | 1.0 | 212 | 0.7092 | 1.2580 |
| 0.8631 | 2.0 | 424 | 0.8190 | 0.6676 |
| 0.5449 | 3.0 | 636 | 0.8523 | 0.5124 |
| 0.4396 | 4.0 | 848 | 0.8736 | 0.4459 |
| 0.3852 | 5.0 | 1060 | 0.8816 | 0.4026 |
| 0.3488 | 6.0 | 1272 | 0.8902 | 0.3763 |
| 0.324 | 7.0 | 1484 | 0.8942 | 0.3588 |
| 0.3072 | 8.0 | 1696 | 0.9062 | 0.3420 |
| 0.2928 | 9.0 | 1908 | 0.9055 | 0.3330 |
| 0.2826 | 10.0 | 2120 | 0.9082 | 0.3231 |
| 0.2732 | 11.0 | 2332 | 0.9115 | 0.3172 |
| 0.2669 | 12.0 | 2544 | 0.9128 | 0.3119 |
| 0.2619 | 13.0 | 2756 | 0.9155 | 0.3086 |
| 0.258 | 14.0 | 2968 | 0.9155 | 0.3068 |
| 0.2566 | 15.0 | 3180 | 0.9148 | 0.3063 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
letuandat/tts-nnng-2411-v2
|
letuandat
| 2024-11-20T13:36:45Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-08T08:10:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hi-Q/krx_gemma_9B_it_1120_CT
|
Hi-Q
| 2024-11-20T13:35:28Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"krx",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T12:57:37Z |
---
library_name: transformers
tags:
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lweng35/RoBERTa-best
|
lweng35
| 2024-11-20T13:34:25Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-19T22:01:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ijohn07/DeepSeek-V2-Lite-Q8_0-GGUF
|
ijohn07
| 2024-11-20T13:31:54Z | 11 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-V2-Lite",
"base_model:quantized:deepseek-ai/DeepSeek-V2-Lite",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-19T16:54:21Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL
tags:
- llama-cpp
- gguf-my-repo
base_model: deepseek-ai/DeepSeek-V2-Lite
---
# ijohn07/DeepSeek-V2-Lite-Q8_0-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-V2-Lite`](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ijohn07/DeepSeek-V2-Lite-Q8_0-GGUF --hf-file deepseek-v2-lite-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ijohn07/DeepSeek-V2-Lite-Q8_0-GGUF --hf-file deepseek-v2-lite-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ijohn07/DeepSeek-V2-Lite-Q8_0-GGUF --hf-file deepseek-v2-lite-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ijohn07/DeepSeek-V2-Lite-Q8_0-GGUF --hf-file deepseek-v2-lite-q8_0.gguf -c 2048
```
|
HugoFresh/flux_Maximilian
|
HugoFresh
| 2024-11-20T13:28:31Z | 6 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-20T12:42:57Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MaximilianIT
---
# Flux_Maximilian
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MaximilianIT` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA adapter.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('HugoFresh/flux_Maximilian', weight_name='lora.safetensors')

# Include the trigger word `MaximilianIT` in your prompt to activate the LoRA.
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Nama2024/ast-finetuned-audioset-10-10-0.4593-finetuned-gunshot
|
Nama2024
| 2024-11-20T13:25:16Z | 213 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-11-20T12:56:04Z |
---
library_name: transformers
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gunshot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gunshot
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9969
- Accuracy: 0.7412
## Model description
More information needed
## Intended uses & limitations
More information needed
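A minimal inference sketch (the checkpoint id comes from this card; the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned audio classifier from the Hub.
clf = pipeline(
    "audio-classification",
    model="Nama2024/ast-finetuned-audioset-10-10-0.4593-finetuned-gunshot",
)

# "clip.wav" is a hypothetical local audio file.
print(clf("clip.wav"))
```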
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.398 | 1.0 | 341 | 1.3407 | 0.6059 |
| 1.0815 | 2.0 | 682 | 0.7835 | 0.7118 |
| 0.1502 | 3.0 | 1023 | 0.9741 | 0.6706 |
| 0.1563 | 4.0 | 1364 | 0.7744 | 0.7765 |
| 0.3851 | 5.0 | 1705 | 0.6325 | 0.8176 |
| 0.3527 | 6.0 | 2046 | 0.6877 | 0.7765 |
| 0.0537 | 7.0 | 2387 | 0.5326 | 0.7353 |
| 0.154 | 8.0 | 2728 | 0.7756 | 0.7824 |
| 0.0984 | 9.0 | 3069 | 0.9495 | 0.7412 |
| 0.3237 | 10.0 | 3410 | 0.9969 | 0.7412 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.1.dev0
- Tokenizers 0.20.3
|
anime-sssh/model
|
anime-sssh
| 2024-11-20T13:23:45Z | 64 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:quantized:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-18T11:03:25Z |
---
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** anime-sssh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/SmolLM2-1.7B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Clark88/Tinyllama-QNA-thesmartest-1.0
|
Clark88
| 2024-11-20T13:22:16Z | 225 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T13:20:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Conspirators/krx_qwen2.5-7b-it_nv6
|
Conspirators
| 2024-11-20T13:22:10Z | 7 | 0 | null |
[
"pytorch",
"qwen2",
"unsloth",
"krx",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-11-20T12:23:51Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
tags:
- unsloth
- krx
- trl
- qwen2
- sft
---
|
JacobLinCool/rvc-jacob-2
|
JacobLinCool
| 2024-11-20T13:17:05Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2024-11-19T20:53:21Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
|
nteku1/GPT2_point5_2
|
nteku1
| 2024-11-20T13:15:37Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"gpt2",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2024-11-20T13:15:18Z |
---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
model-index:
- name: GPT2_point5_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2_point5_2
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
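A minimal inference sketch (the checkpoint id comes from this card; the prompt is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub.
generator = pipeline("text-generation", model="nteku1/GPT2_point5_2")

print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```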
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.15.2
|
ProdocAI/Intent-classification-1b-GGUF-v1
|
ProdocAI
| 2024-11-20T13:14:36Z | 15 | 2 | null |
[
"pytorch",
"gguf",
"llama",
"classification",
"intent",
"healthcare",
"conversational",
"LLM",
"multilingual",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T12:47:26Z |
---
license: mit
language: multilingual
base_model: unsloth/Llama-3.2-1B-Instruct
quantized_by: prodoc.ai
tags:
- classification
- intent
- healthcare
- conversational
- LLM
widget:
- text: "I want to book an appointment for my child."
- text: "Can you help me with lab test results?"
---
# ProdocAI/HealthIntent-Classifier-1b-GGUF-v1
`HealthIntent-Classifier-1b-GGUF-v1` is a fine-tuned version of the `Llama3.2-1B` model, trained to classify healthcare-related queries into predefined intents. This model is designed to streamline user interactions by identifying their specific healthcare-related needs efficiently.
## Ollama Integration
Leverage seamless integration with Ollama, where the model is fully hosted and ready to run. Simply execute the command `ollama run Prodoc/intent-classification-1b` to start utilizing the model's intent classification capabilities. Transform your healthcare conversational systems with precision and ease.
## Model Details
- **Model Name**: `Intent-classification-1b-GGUF-v1`
- **Base Model**: `Llama3.2-1B`
- **Number of Parameters**: 1 Billion
- **Dataset**: Custom dataset of healthcare-related conversations
- **Training Data Statistics**:
- **Languages**: Includes `en`, `mr`, `te`, `hi`, `bn`, among others (detailed in Language Map section)
## Model Objective
The model identifies the intent behind healthcare-related queries and classifies them into one of the predefined categories. This allows for better routing and handling of user requests in healthcare systems.
### Supported Intents
1. **Appointment Booking**
2. **Surgery Enquiry**
3. **Emergency Assistance**
4. **Lab Test Results Inquiry**
5. **Symptom Consultation**
6. **Hospital Services Enquiry**
7. **Job or Internship Enquiry**
8. **Complaint or Feedback**
9. **Health Check-Up Packages**
10. **Health Insurance Enquiry**
11. **Irrelevant Query**
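A minimal classification sketch using the hosted Ollama model mentioned above (assumes the `ollama` Python client is installed and an Ollama server is running locally):
```python
import ollama

# Conversation formatted as described in the "Example Input Format" section below.
conversation = (
    "Below is the conversation between the bot and user\n"
    "user: I want to book an appointment for my child."
)

response = ollama.chat(
    model="Prodoc/intent-classification-1b",  # name from the Ollama Integration section
    messages=[{"role": "user", "content": conversation}],
)
print(response["message"]["content"])  # expected: one of the intents, e.g. "Appointment Booking"
```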
## Dataset Overview
The dataset includes 3,000 unique conversations mapped to the above intents. Conversations are provided in multiple languages to support diverse user bases.
### Example Input Format
Input to the model should be provided in the following format:
```plaintext
"Below is the conversation between the bot and user
user: Hello
bot: Hello! I am Aisha, a Health-Care Executive at Madhavbaug. How can I assist you today with your health concern?
user: Is my appointment confirmed for tomorrow
bot: I can help you with that. Can you please provide me with your name and contact number so I can check the appointment status for you?
user: ๐
bot: I can help you check the status of your appointment. Please provide me with your name and contact number.
user: Krishnan Iyer
9323195487
bot: Current location, city name, or postal code is required to book an appointment, please provide your location details so I can assist you further
user: Booking is already done for tomorrow 10 am
bot: It seems that your message does not include information about your location. Could you please provide your current city name or postal code? This will help me assist you better in booking an appointment.
user: Thane
bot: Oh great! We have found 20 expert doctors in your area. Could you please let us know your exact location in Thane?
user: Majiwada Thane West
bot: Clinic list Sent
user: ✅
Your appointment has been successfully booked!
*Appointment Details:*
*Clinic Name:*
Kolshet Road (Thane W )
*Date & Time:*
19-Oct-2024 10:00
*Clinic Address:*
Shop no.102, First floor, Krishna towers, above Sanman hotel kolshet road, kapurbawdi,Thane West, Thane, MAHARASHTRA - 400607
Thank you for booking appointment with us!
We shall call you shortly to confirm your appointment.
bot: Your appointment has been successfully booked at Kolshet Road (Thane W) clinic on 19-Oct-2024 at 10:00. The clinic address is Shop no.102, First floor, Krishna towers, above Sanman hotel kolshet road, kapurbawdi, Thane West, Thane, MAHARASHTRA - 400607. Thank you for booking the appointment!
user: Hope it is confirmed
bot: Yes, your appointment at Kolshet Road (Thane W) clinic on 19-Oct-2024 at 10:00 is confirmed. Thank you for booking the appointment!
user: Thanks"
```
|
cristianmanosalva/NER-finetuning-BETO-CM-V3
|
cristianmanosalva
| 2024-11-20T13:13:13Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:biobert_json",
"base_model:NazaGara/NER-fine-tuned-BETO",
"base_model:finetune:NazaGara/NER-fine-tuned-BETO",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-20T10:13:02Z |
---
library_name: transformers
license: cc-by-4.0
base_model: NazaGara/NER-fine-tuned-BETO
tags:
- generated_from_trainer
datasets:
- biobert_json
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-finetuning-BETO-CM-V3
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: biobert_json
type: biobert_json
config: Biobert_json
split: validation
args: Biobert_json
metrics:
- name: Precision
type: precision
value: 0.9332171260485892
- name: Recall
type: recall
value: 0.9462056776759086
- name: F1
type: f1
value: 0.9396665204036859
- name: Accuracy
type: accuracy
value: 0.9769126559714795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetuning-BETO-CM-V3
This model is a fine-tuned version of [NazaGara/NER-fine-tuned-BETO](https://huggingface.co/NazaGara/NER-fine-tuned-BETO) on the biobert_json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1234
- Precision: 0.9332
- Recall: 0.9462
- F1: 0.9397
- Accuracy: 0.9769
## Model description
More information needed
## Intended uses & limitations
More information needed
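A minimal inference sketch (the checkpoint id comes from this card; the Spanish example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Spanish clinical NER checkpoint from the Hub.
ner = pipeline(
    "token-classification",
    model="cristianmanosalva/NER-finetuning-BETO-CM-V3",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

print(ner("El paciente presenta fiebre y dolor abdominal desde hace dos días."))
```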
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3448 | 1.0 | 612 | 0.1106 | 0.9187 | 0.9255 | 0.9221 | 0.9719 |
| 0.1036 | 2.0 | 1224 | 0.0990 | 0.9202 | 0.9507 | 0.9352 | 0.9763 |
| 0.073 | 3.0 | 1836 | 0.0982 | 0.9356 | 0.9493 | 0.9424 | 0.9783 |
| 0.057 | 4.0 | 2448 | 0.1070 | 0.9304 | 0.9493 | 0.9397 | 0.9771 |
| 0.0405 | 5.0 | 3060 | 0.1034 | 0.9353 | 0.9486 | 0.9419 | 0.9783 |
| 0.0361 | 6.0 | 3672 | 0.1081 | 0.9280 | 0.9474 | 0.9376 | 0.9767 |
| 0.0287 | 7.0 | 4284 | 0.1106 | 0.9309 | 0.9490 | 0.9398 | 0.9777 |
| 0.0284 | 8.0 | 4896 | 0.1182 | 0.9288 | 0.9463 | 0.9375 | 0.9768 |
| 0.0212 | 9.0 | 5508 | 0.1195 | 0.9340 | 0.9464 | 0.9402 | 0.9774 |
| 0.0191 | 10.0 | 6120 | 0.1234 | 0.9332 | 0.9462 | 0.9397 | 0.9769 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/Swallow-MoE-4x7B-lisa-GGUF
|
mradermacher
| 2024-11-20T13:12:44Z | 92 | 1 |
transformers
|
[
"transformers",
"gguf",
"merge",
"moe",
"lisa",
"ja",
"dataset:kunishou/amenokaku-code-instruct",
"dataset:llm-jp/oasst1-21k-en",
"dataset:hieunguyenminh/roleplay",
"dataset:meta-math/MetaMathQA",
"dataset:kunishou/jp-effective-instructions",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-19T21:23:42Z |
---
base_model: Aratako/Swallow-MoE-4x7B-lisa
datasets:
- kunishou/amenokaku-code-instruct
- llm-jp/oasst1-21k-en
- hieunguyenminh/roleplay
- meta-math/MetaMathQA
- kunishou/jp-effective-instructions
language:
- ja
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- merge
- moe
- lisa
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Aratako/Swallow-MoE-4x7B-lisa
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
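For example, a single quant from the table below can be fetched programmatically (a minimal sketch using `huggingface_hub`; any filename from the table works):
```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/Swallow-MoE-4x7B-lisa-GGUF",
    filename="Swallow-MoE-4x7B-lisa.Q4_K_M.gguf",
)
print(path)
```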
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q2_K.gguf) | Q2_K | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q3_K_S.gguf) | Q3_K_S | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q3_K_M.gguf) | Q3_K_M | 9.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q3_K_L.gguf) | Q3_K_L | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.IQ4_XS.gguf) | IQ4_XS | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q4_K_S.gguf) | Q4_K_S | 11.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q4_K_M.gguf) | Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q5_K_S.gguf) | Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q5_K_M.gguf) | Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q6_K.gguf) | Q6_K | 16.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MoE-4x7B-lisa-GGUF/resolve/main/Swallow-MoE-4x7B-lisa.Q8_0.gguf) | Q8_0 | 21.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gdshaji/gd-ms-nem-5k-v1
|
gdshaji
| 2024-11-20T13:11:04Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T13:07:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
danelcsb/vitpose-base
|
danelcsb
| 2024-11-20T13:10:45Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vitpose",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T13:09:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/MN-Chunky-Lotus-12B-Q8_0-GGUF
|
Triangle104
| 2024-11-20T13:08:20Z | 6 | 1 |
transformers
|
[
"transformers",
"gguf",
"storywriting",
"text adventure",
"creative",
"story",
"writing",
"fiction",
"roleplaying",
"rp",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:FallenMerick/MN-Chunky-Lotus-12B",
"base_model:quantized:FallenMerick/MN-Chunky-Lotus-12B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T13:06:48Z |
---
license: cc-by-4.0
language:
- en
base_model: FallenMerick/MN-Chunky-Lotus-12B
library_name: transformers
tags:
- storywriting
- text adventure
- creative
- story
- writing
- fiction
- roleplaying
- rp
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/MN-Chunky-Lotus-12B-Q8_0-GGUF
This model was converted to GGUF format from [`FallenMerick/MN-Chunky-Lotus-12B`](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) for more details on the model.
---
Model details:
-
I had originally planned to use this model for future/further merges, but decided to go ahead and release it since it scored rather high on my local EQ Bench testing (79.58 w/ 100% parsed @ 8-bit).
Bear in mind that most models tend to score a bit higher on my own local tests as compared to their posted scores. Still, it's the highest score I've personally seen from all the models I've tested.
It's a decent model, with great emotional intelligence and acceptable adherence to various character personalities. It does a good job at roleplaying despite being a bit bland at times.
Overall, I like the way it writes, but it has a few formatting issues that show up from time to time, and it has an uncommon tendency to paste walls of character feelings/intentions at the end of some outputs without any prompting. This is something I hope to correct with future iterations.
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the TIES merge method.
Models Merged
-
The following models were included in the merge:
Epiculous/Violet_Twilight-v0.2
nbeerbower/mistral-nemo-gutenberg-12B-v4
flammenai/Mahou-1.5-mistral-nemo-12B
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Epiculous/Violet_Twilight-v0.2
    parameters:
      weight: 1.0
      density: 1.0
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: 1.0
      density: 0.54
  - model: flammenai/Mahou-1.5-mistral-nemo-12B
    parameters:
      weight: 1.0
      density: 0.26
merge_method: ties
base_model: TheDrummer/Rocinante-12B-v1.1
parameters:
  normalize: true
dtype: bfloat16
```
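For reference, mergekit exposes a command-line entry point that consumes a config like this; the following is a minimal sketch, assuming the YAML above is saved as `config.yaml` (the output directory name is arbitrary):
```bash
# pip install mergekit  (assumed installation method)
# Apply the merge recipe; --cuda offloads tensor math to the GPU if available
mergekit-yaml config.yaml ./merged-model --cuda
```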
The idea behind this recipe was to take the long-form writing capabilities of Gutenberg, curtail it a bit with the very short output formatting of Mahou, and use Violet Twilight as an extremely solid roleplaying foundation underneath.
Rocinante is used as the base model in this merge in order to really target the delta weights from Gutenberg, since those seemed to have the highest impact on the resulting EQ of the model.
Special shoutout to @matchaaaaa for helping with testing, and for all the great model recommendations. Also, for just being an all around great person who's really inspired and motivated me to continue merging and working on models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q8_0-GGUF --hf-file mn-chunky-lotus-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q8_0-GGUF --hf-file mn-chunky-lotus-12b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q8_0-GGUF --hf-file mn-chunky-lotus-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q8_0-GGUF --hf-file mn-chunky-lotus-12b-q8_0.gguf -c 2048
```
|
alibaba-pai/EasyAnimateV2-XL-2-512x512
|
alibaba-pai
| 2024-11-20T12:56:09Z | 2 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"text-generation-inference",
"en",
"arxiv:2405.18991",
"license:apache-2.0",
"diffusers:PixArtAlphaPipeline",
"region:us"
] |
text-to-image
| 2024-06-03T09:40:10Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
---
# 📷 EasyAnimate | An End-to-End Solution for High-Resolution and Long Video Generation
😊 EasyAnimate is an end-to-end solution for generating high-resolution, long videos. We can train transformer-based diffusion generators, train VAEs for processing long videos, and preprocess metadata.
😊 Based on a Sora-like structure and DiT, we use a transformer as the diffusion model for video generation. We built EasyAnimate on top of a motion module, U-ViT, and Slice VAE. In the future, we will try more training schemes to improve the results.
😊 Welcome!
This model was trained at 512\*512 resolution with 144 frames for [EasyAnimate](https://github.com/aigc-apps/EasyAnimate). We give a simple usage example here; for more details, you can refer to [EasyAnimate](https://github.com/aigc-apps/EasyAnimate).
# Table of Contents
- [Result Gallery](#result-gallery)
- [How to use](#how-to-use)
- [Model zoo](#model-zoo)
- [Algorithm Detailed](#algorithm-detailed)
- [TODO List](#todo-list)
- [Contact Us](#contact-us)
- [Reference](#reference)
- [License](#license)
# Result Gallery
These are our generated results [GALLERY](scripts/Result_Gallery.md):
<video controls src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v2/easyanimate.mp4" title="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v2/easyanimate.mov"></video>
Our UI interface is as follows:

# How to use
```
# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git
# enter EasyAnimate's dir
cd EasyAnimate
# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model
cd models/Diffusion_Transformer/
git lfs install
git clone https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-512x512
cd ../../
```
# Model zoo
EasyAnimateV2:
| Name | Type | Storage Space | Url | Hugging Face | Model Scope | Description |
|--|--|--|--|--|--|--|
| EasyAnimateV2-XL-2-512x512.tar | EasyAnimateV2 | 16.2GB | - | [๐คLink](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-512x512)| [๐Link](https://modelscope.cn/models/PAI/EasyAnimateV2-XL-2-512x512)| EasyAnimateV2 official weights for 512x512 resolution. Training with 144 frames and fps 24 |
| EasyAnimateV2-XL-2-768x768.tar | EasyAnimateV2 | 16.2GB | - | [๐คLink](https://huggingface.co/alibaba-pai/EasyAnimateV2-XL-2-768x768) | [๐Link](https://modelscope.cn/models/PAI/EasyAnimateV2-XL-2-768x768)| EasyAnimateV2 official weights for 768x768 resolution. Training with 144 frames and fps 24 |
| easyanimatev2_minimalism_lora.safetensors | LoRA of PixArt | 485.1MB | [Download](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/Personalized_Model/easyanimatev2_minimalism_lora.safetensors)| - | - | A LoRA trained on a specific style of images. Images can be downloaded from [Url](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v2/Minimalism.zip). |
# Algorithm Detailed
### 1. Data Preprocessing
**Video Cut**
For cutting long videos, EasyAnimate utilizes PySceneDetect to identify scene changes within the video and performs scene cutting based on threshold values to ensure consistency in the themes of the video segments. After cutting, we only keep segments with lengths ranging from 3 to 10 seconds for model training.
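As an illustration, a minimal scene-cutting pass with PySceneDetect might look like the following; the detector threshold and file name are assumptions, not the values used for training:
```python
# Minimal sketch: detect scene changes and keep 3-10 s segments (assumes scenedetect >= 0.6)
from scenedetect import detect, ContentDetector

scenes = detect("input_video.mp4", ContentDetector(threshold=27.0))
kept = []
for start, end in scenes:
    duration = (end - start).get_seconds()
    if 3.0 <= duration <= 10.0:  # keep only segments usable for training
        kept.append((start.get_timecode(), end.get_timecode()))
print(f"kept {len(kept)} of {len(scenes)} segments")
```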
**Video Cleaning and Description**
Following SVD's data preparation process, EasyAnimate provides a simple yet effective data processing pipeline for high-quality data filtering and labeling. It also supports distributed processing to accelerate the speed of data preprocessing. The overall process is as follows:
- Duration filtering: Analyze the basic information of the video to filter out low-quality videos that are short in duration or low in resolution.
- Aesthetic filtering: Filter out videos with poor content (blurry, dim, etc.) by calculating the average aesthetic score over 4 uniformly sampled frames.
- Text filtering: Use easyocr to calculate the text proportion of middle frames to filter out videos with a large proportion of text.
- Motion filtering: Calculate interframe optical flow differences to filter out videos that move too slowly or too quickly.
- Text description: Recaption video frames using videochat2 and vila. PAI is also developing a higher-quality video recaptioning model, which will be released for use as soon as possible.
### 2. Model Architecture
We have adopted [PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha) as the base model and modified the VAE and DiT model structures on this basis to better support video generation. The overall structure of EasyAnimate is as follows:
The diagram below outlines the pipeline of EasyAnimate. It includes the Text Encoder, Video VAE (video encoder and decoder), and Diffusion Transformer (DiT). The T5 Encoder is used as the text encoder. Other components are detailed in the sections below.
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/pipeline_v2.jpg" alt="ui" style="zoom:50%;" />
To introduce feature information along the temporal axis, EasyAnimate incorporates the Motion Module to achieve the expansion from 2D images to 3D videos. For better generation effects, it jointly finetunes the Backbone together with the Motion Module, thereby achieving image generation and video generation within a single Pipeline.
Additionally, referencing U-ViT, it introduces a skip connection structure into EasyAnimate to further optimize deeper features by incorporating shallow features. A fully connected layer is also zero-initialized for each skip connection structure, allowing it to be applied as a plug-in module to previously trained and well-performing DiTs.
Moreover, it proposes Slice VAE, which addresses the memory difficulties encountered by MagViT when dealing with long and large videos, while also achieving greater compression in the temporal dimension during video encoding and decoding stages compared to MagViT.
For more details, please refer to [arxiv](https://arxiv.org/abs/2405.18991).
# TODO List
- Support model with larger resolution.
- Support video inpaint model.
# Contact Us
1. Use DingTalk to search for group 77450006752, or scan the image below to join.
2. Scan the image to join the WeChat group; if the QR code has expired, add this member as a friend first to be invited.
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/dd.png" alt="ding group" width="30%"/>
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/wechat.jpg" alt="Wechat group" width="30%"/>
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/person.jpg" alt="Person" width="30%"/>
# Reference
- magvit: https://github.com/google-research/magvit
- PixArt: https://github.com/PixArt-alpha/PixArt-alpha
- Open-Sora-Plan: https://github.com/PKU-YuanGroup/Open-Sora-Plan
- Open-Sora: https://github.com/hpcaitech/Open-Sora
- Animatediff: https://github.com/guoyww/AnimateDiff
# License
This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).
|
alibaba-pai/EasyAnimateV3-XL-2-InP-960x960
|
alibaba-pai
| 2024-11-20T12:54:53Z | 11 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"text-generation-inference",
"en",
"arxiv:2405.18991",
"license:apache-2.0",
"diffusers:PixArtAlphaPipeline",
"region:us"
] |
text-to-image
| 2024-07-05T07:50:25Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
---
# 📷 EasyAnimate | An End-to-End Solution for High-Resolution and Long Video Generation
😊 EasyAnimate is an end-to-end solution for generating high-resolution, long videos. We can train transformer-based diffusion generators, train VAEs for processing long videos, and preprocess metadata.
😊 Based on a Sora-like structure and DiT, we use a transformer as the diffusion model for video generation. We built EasyAnimate on top of a motion module, U-ViT, and Slice VAE. In the future, we will try more training schemes to improve the results.
😊 Welcome!
This model was trained at 960\*960 resolution with 144 frames for [EasyAnimate](https://github.com/aigc-apps/EasyAnimate), covering both text-to-video and image-to-video. We give a simple usage example here; for more details, you can refer to [EasyAnimate](https://github.com/aigc-apps/EasyAnimate).
# Table of Contents
- [Result Gallery](#result-gallery)
- [How to use](#how-to-use)
- [Model zoo](#model-zoo)
- [Algorithm Detailed](#algorithm-detailed)
- [TODO List](#todo-list)
- [Contact Us](#contact-us)
- [Reference](#reference)
- [License](#license)
# Result Gallery
These are our generated results:
<video controls src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/v3/EasyAnimate-v3-DemoShow.mp4" title="movie"></video>
Our UI interface is as follows:

# How to use
```
# clone code
git clone https://github.com/aigc-apps/EasyAnimate.git
# enter EasyAnimate's dir
cd EasyAnimate
# download weights
mkdir models/Diffusion_Transformer
mkdir models/Motion_Module
mkdir models/Personalized_Model
cd models/Diffusion_Transformer/
git lfs install
git clone https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-960x960
cd ../../
```
# Model zoo
EasyAnimateV3:
| Name | Type | Storage Space | Hugging Face | Model Scope | Description |
|--|--|--|--|--|--|
| EasyAnimateV3-XL-2-InP-512x512.tar | EasyAnimateV3 | 18.2GB | [๐คLink](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-512x512)| [๐Link](https://modelscope.cn/models/PAI/EasyAnimateV3-XL-2-InP-512x512) | EasyAnimateV3 official weights for 512x512 text and image to video resolution. Training with 144 frames and fps 24 |
| EasyAnimateV3-XL-2-InP-768x768.tar | EasyAnimateV3 | 18.2GB | [๐คLink](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-768x768) | [๐Link](https://modelscope.cn/models/PAI/EasyAnimateV3-XL-2-InP-768x768) | EasyAnimateV3 official weights for 768x768 text and image to video resolution. Training with 144 frames and fps 24 |
| EasyAnimateV3-XL-2-InP-960x960.tar | EasyAnimateV3 | 18.2GB | [๐คLink](https://huggingface.co/alibaba-pai/EasyAnimateV3-XL-2-InP-960x960) | [๐Link](https://modelscope.cn/models/PAI/EasyAnimateV3-XL-2-InP-960x960) | EasyAnimateV3 official weights for 960x960 text and image to video resolution. Training with 144 frames and fps 24 |
# Algorithm Detailed
### 1. Data Preprocessing
**Video Cut**
For cutting long videos, EasyAnimate utilizes PySceneDetect to identify scene changes within the video and performs scene cutting based on threshold values to ensure consistency in the themes of the video segments. After cutting, we only keep segments with lengths ranging from 3 to 10 seconds for model training.
**Video Cleaning and Description**
Following SVD's data preparation process, EasyAnimate provides a simple yet effective data processing pipeline for high-quality data filtering and labeling. It also supports distributed processing to accelerate the speed of data preprocessing. The overall process is as follows:
- Duration filtering: Analyze the basic information of the video to filter out low-quality videos that are short in duration or low in resolution.
- Aesthetic filtering: Filter out videos with poor content (blurry, dim, etc.) by calculating the average aesthetic score over 4 uniformly sampled frames.
- Text filtering: Use easyocr to calculate the text proportion of middle frames to filter out videos with a large proportion of text.
- Motion filtering: Calculate inter-frame optical flow differences to filter out videos that move too slowly or too quickly (see the sketch after this list).
- Text description: Recaption video frames using videochat2 and vila. PAI is also developing a higher-quality video recaptioning model, which will be released for use as soon as possible.
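As a rough illustration of the motion filter, one could score clips by their mean Farnebäck optical-flow magnitude with OpenCV; this is a sketch under assumed thresholds, not the pipeline's actual implementation:
```python
# Minimal sketch: mean optical-flow magnitude as a motion score (assumes opencv-python, numpy)
import cv2
import numpy as np

def motion_score(path: str, step: int = 5) -> float:
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        return 0.0
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    mags, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % step:
            continue  # subsample frames to keep the pass cheap
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
        prev_gray = gray
    cap.release()
    return float(np.mean(mags)) if mags else 0.0

# Keep clips whose motion is neither too slow nor too fast (example thresholds)
keep = 0.3 < motion_score("clip.mp4") < 20.0
```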
### 2. Model Architecture
We have adopted [PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha) as the base model and modified the VAE and DiT model structures on this basis to better support video generation. The overall structure of EasyAnimate is as follows:
The diagram below outlines the pipeline of EasyAnimate. It includes the Text Encoder, Video VAE (video encoder and decoder), and Diffusion Transformer (DiT). The T5 Encoder is used as the text encoder. Other components are detailed in the sections below.
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/pipeline_v2.jpg" alt="ui" style="zoom:50%;" />
To introduce feature information along the temporal axis, EasyAnimate incorporates the Motion Module to achieve the expansion from 2D images to 3D videos. For better generation effects, it jointly finetunes the Backbone together with the Motion Module, thereby achieving image generation and video generation within a single Pipeline.
Additionally, referencing U-ViT, it introduces a skip connection structure into EasyAnimate to further optimize deeper features by incorporating shallow features. A fully connected layer is also zero-initialized for each skip connection structure, allowing it to be applied as a plug-in module to previously trained and well-performing DiTs.
Moreover, it proposes Slice VAE, which addresses the memory difficulties encountered by MagViT when dealing with long and large videos, while also achieving greater compression in the temporal dimension during video encoding and decoding stages compared to MagViT.
For more details, please refer to [arxiv](https://arxiv.org/abs/2405.18991).
# TODO List
- Support model with larger resolution.
- Support video inpaint model.
# Contact Us
1. Use DingTalk to search for group 77450006752, or scan the image below to join.
2. Scan the image to join the WeChat group; if the QR code has expired, add this member as a friend first to be invited.
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/dd.png" alt="ding group" width="30%"/>
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/wechat.jpg" alt="Wechat group" width="30%"/>
<img src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/group/person.jpg" alt="Person" width="30%"/>
# Reference
- magvit: https://github.com/google-research/magvit
- PixArt: https://github.com/PixArt-alpha/PixArt-alpha
- Open-Sora-Plan: https://github.com/PKU-YuanGroup/Open-Sora-Plan
- Open-Sora: https://github.com/hpcaitech/Open-Sora
- Animatediff: https://github.com/guoyww/AnimateDiff
# License
This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).
|
mav23/bloomz-1b1-GGUF
|
mav23
| 2024-11-20T12:52:29Z | 5 | 0 | null |
[
"gguf",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"dataset:bigscience/xP3",
"arxiv:2211.01786",
"license:bigscience-bloom-rail-1.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T12:33:22Z |
---
datasets:
- bigscience/xP3
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
widget:
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?"
example_title: "zh-en sentiment"
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?"
example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"."
example_title: "vi-en query"
- text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»."
example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
example_title: "te-en qa"
- text: "Why is the sky blue?"
example_title: "en-en qa"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
example_title: "hi-en fable"
model-index:
- name: bloomz-1b1
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 52.33
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 50.49
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.04
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 51.82
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 54.75
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 53.97
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 55.16
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 33.3
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 33.5
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 34.5
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 58.93
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 65.7
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 46.59
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 40.4
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 40.12
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 39.32
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.11
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.55
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.51
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.89
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.81
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 41.29
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.93
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 37.51
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 41.37
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.19
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.63
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 2.62
- type: Pass@10
value: 6.22
- type: Pass@100
value: 11.68
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: "2016"
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 62.75
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 63.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 55.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 52.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 60.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 64.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 57.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 59.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 55.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 63.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 53.54
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 58.37
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 52.35
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 55.92
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 57.97
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 47.05
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 50.3
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 49.97
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 55.86
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 58.17
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t'aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-1b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-1b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-1b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
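To see this concretely, here is a small sketch reusing the CPU snippet above to compare the two prompt variants; outputs will vary, and `max_new_tokens=20` is an arbitrary choice:
```python
# Compare generations with and without the trailing full stop
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-1b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

for prompt in ["Translate to English: Je t'aime", "Translate to English: Je t'aime."]:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_new_tokens=20)
    print(repr(prompt), "->", tokenizer.decode(outputs[0], skip_special_tokens=True))
```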
# Training
## Model
- **Architecture:** Same as [bloom-1b1](https://huggingface.co/bigscience/bloom-1b1), also refer to the `config.json` file
- **Finetuning steps:** 250
- **Finetuning tokens:** 502 million
- **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 1x data parallel
- **Precision:** float16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
|
abdulmannan-01/qwen-2.5-1.5b-finetuned-for-json-function-calling
|
abdulmannan-01
| 2024-11-20T12:50:56Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T12:42:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/MN-Chunky-Lotus-12B-Q5_K_S-GGUF
|
Triangle104
| 2024-11-20T12:48:06Z | 9 | 1 |
transformers
|
[
"transformers",
"gguf",
"storywriting",
"text adventure",
"creative",
"story",
"writing",
"fiction",
"roleplaying",
"rp",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:FallenMerick/MN-Chunky-Lotus-12B",
"base_model:quantized:FallenMerick/MN-Chunky-Lotus-12B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T12:46:46Z |
---
license: cc-by-4.0
language:
- en
base_model: FallenMerick/MN-Chunky-Lotus-12B
library_name: transformers
tags:
- storywriting
- text adventure
- creative
- story
- writing
- fiction
- roleplaying
- rp
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/MN-Chunky-Lotus-12B-Q5_K_S-GGUF
This model was converted to GGUF format from [`FallenMerick/MN-Chunky-Lotus-12B`](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) for more details on the model.
---
Model details:
-
I had originally planned to use this model for future/further merges, but decided to go ahead and release it since it scored rather high on my local EQ Bench testing (79.58 w/ 100% parsed @ 8-bit).
Bear in mind that most models tend to score a bit higher on my own local tests as compared to their posted scores. Still, it's the highest score I've personally seen from all the models I've tested.
It's a decent model, with great emotional intelligence and acceptable adherence to various character personalities. It does a good job at roleplaying despite being a bit bland at times.
Overall, I like the way it writes, but it has a few formatting issues that show up from time to time, and it has an uncommon tendency to paste walls of character feelings/intentions at the end of some outputs without any prompting. This is something I hope to correct with future iterations.
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the TIES merge method.
Models Merged
-
The following models were included in the merge:
Epiculous/Violet_Twilight-v0.2
nbeerbower/mistral-nemo-gutenberg-12B-v4
flammenai/Mahou-1.5-mistral-nemo-12B
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Epiculous/Violet_Twilight-v0.2
    parameters:
      weight: 1.0
      density: 1.0
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: 1.0
      density: 0.54
  - model: flammenai/Mahou-1.5-mistral-nemo-12B
    parameters:
      weight: 1.0
      density: 0.26
merge_method: ties
base_model: TheDrummer/Rocinante-12B-v1.1
parameters:
  normalize: true
dtype: bfloat16
```
The idea behind this recipe was to take the long-form writing capabilities of Gutenberg, curtail it a bit with the very short output formatting of Mahou, and use Violet Twilight as an extremely solid roleplaying foundation underneath.
Rocinante is used as the base model in this merge in order to really target the delta weights from Gutenberg, since those seemed to have the highest impact on the resulting EQ of the model.
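For anyone wanting to reproduce a merge like this, here is a minimal sketch of how such a config is typically applied with mergekit's `mergekit-yaml` entry point; the config filename and output path are placeholders, not taken from the original run:
```python
# Minimal sketch: apply a mergekit config like the one above.
# Assumes `pip install mergekit` and the YAML saved as config.yaml;
# mergekit-yaml is mergekit's command-line entry point.
import subprocess

subprocess.run(["mergekit-yaml", "config.yaml", "./merged-model"], check=True)
```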
Special shoutout to @matchaaaaa for helping with testing, and for all the great model recommendations. Also, for just being an all around great person who's really inspired and motivated me to continue merging and working on models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q5_K_S-GGUF --hf-file mn-chunky-lotus-12b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q5_K_S-GGUF --hf-file mn-chunky-lotus-12b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q5_K_S-GGUF --hf-file mn-chunky-lotus-12b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q5_K_S-GGUF --hf-file mn-chunky-lotus-12b-q5_k_s.gguf -c 2048
```
|
qubvel-hf/vitpose-base
|
qubvel-hf
| 2024-11-20T12:36:18Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vitpose",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T12:36:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/MN-Chunky-Lotus-12B-Q4_K_S-GGUF
|
Triangle104
| 2024-11-20T12:30:54Z | 15 | 1 |
transformers
|
[
"transformers",
"gguf",
"storywriting",
"text adventure",
"creative",
"story",
"writing",
"fiction",
"roleplaying",
"rp",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:FallenMerick/MN-Chunky-Lotus-12B",
"base_model:quantized:FallenMerick/MN-Chunky-Lotus-12B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T12:28:54Z |
---
license: cc-by-4.0
language:
- en
base_model: FallenMerick/MN-Chunky-Lotus-12B
library_name: transformers
tags:
- storywriting
- text adventure
- creative
- story
- writing
- fiction
- roleplaying
- rp
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/MN-Chunky-Lotus-12B-Q4_K_S-GGUF
This model was converted to GGUF format from [`FallenMerick/MN-Chunky-Lotus-12B`](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) for more details on the model.
---
Model details:
-
I had originally planned to use this model for future/further merges, but decided to go ahead and release it since it scored rather high on my local EQ Bench testing (79.58 w/ 100% parsed @ 8-bit).
Bear in mind that most models tend to score a bit higher on my own local tests compared to their posted scores. Still, it's the highest score I've personally seen from all the models I've tested.
It's a decent model, with great emotional intelligence and acceptable adherence to various character personalities. It does a good job at roleplaying despite being a bit bland at times.
Overall, I like the way it writes, but it has a few formatting issues that show up from time to time, and it has an uncommon tendency to paste walls of character feelings/intentions at the end of some outputs without any prompting. This is something I hope to correct with future iterations.
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the TIES merge method.
Models Merged
-
The following models were included in the merge:
Epiculous/Violet_Twilight-v0.2
nbeerbower/mistral-nemo-gutenberg-12B-v4
flammenai/Mahou-1.5-mistral-nemo-12B
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Epiculous/Violet_Twilight-v0.2
    parameters:
      weight: 1.0
      density: 1.0
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: 1.0
      density: 0.54
  - model: flammenai/Mahou-1.5-mistral-nemo-12B
    parameters:
      weight: 1.0
      density: 0.26
merge_method: ties
base_model: TheDrummer/Rocinante-12B-v1.1
parameters:
  normalize: true
dtype: bfloat16
```
The idea behind this recipe was to take the long-form writing capabilities of Gutenberg, curtail it a bit with the very short output formatting of Mahou, and use Violet Twilight as an extremely solid roleplaying foundation underneath.
Rocinante is used as the base model in this merge in order to really target the delta weights from Gutenberg, since those seemed to have the highest impact on the resulting EQ of the model.
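For anyone wanting to reproduce a merge like this, here is a minimal sketch of how such a config is typically applied with mergekit's `mergekit-yaml` entry point; the config filename and output path are placeholders, not taken from the original run:
```python
# Minimal sketch: apply a mergekit config like the one above.
# Assumes `pip install mergekit` and the YAML saved as config.yaml;
# mergekit-yaml is mergekit's command-line entry point.
import subprocess

subprocess.run(["mergekit-yaml", "config.yaml", "./merged-model"], check=True)
```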
Special shoutout to @matchaaaaa for helping with testing, and for all the great model recommendations. Also, for just being an all around great person who's really inspired and motivated me to continue merging and working on models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q4_K_S-GGUF --hf-file mn-chunky-lotus-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q4_K_S-GGUF --hf-file mn-chunky-lotus-12b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q4_K_S-GGUF --hf-file mn-chunky-lotus-12b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q4_K_S-GGUF --hf-file mn-chunky-lotus-12b-q4_k_s.gguf -c 2048
```
|
mav23/bloomz-7b1-mt-GGUF
|
mav23
| 2024-11-20T12:27:43Z | 108 | 0 | null |
[
"gguf",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"dataset:bigscience/xP3mt",
"arxiv:2211.01786",
"license:bigscience-bloom-rail-1.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T11:08:58Z |
---
datasets:
- bigscience/xP3mt
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
widget:
- text: "ไธไธชไผ ๅฅ็ๅผ็ซฏ๏ผไธไธชไธ็ญ็็ฅ่ฏ๏ผ่ฟไธไป
ไป
ๆฏไธ้จ็ตๅฝฑ๏ผ่ๆฏไฝไธบไธไธช่ตฐ่ฟๆฐๆถไปฃ็ๆ ็ญพ๏ผๆฐธ่ฟๅฝช็ณๅฒๅใWould you rate the previous review as positive, neutral or negative?"
example_title: "zh-en sentiment"
- text: "ไธไธชไผ ๅฅ็ๅผ็ซฏ๏ผไธไธชไธ็ญ็็ฅ่ฏ๏ผ่ฟไธไป
ไป
ๆฏไธ้จ็ตๅฝฑ๏ผ่ๆฏไฝไธบไธไธช่ตฐ่ฟๆฐๆถไปฃ็ๆ ็ญพ๏ผๆฐธ่ฟๅฝช็ณๅฒๅใไฝ ่ฎคไธบ่ฟๅฅ่ฏ็็ซๅบๆฏ่ตๆฌใไธญ็ซ่ฟๆฏๆน่ฏ๏ผ"
example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mแบกng neural nhรขn tแบกo\"."
example_title: "vi-en query"
- text: "Proposez au moins cinq mots clรฉs concernant ยซRรฉseau de neurones artificielsยป."
example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
example_title: "te-en qa"
- text: "Why is the sky blue?"
example_title: "en-en qa"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
example_title: "hi-en fable"
model-index:
- name: bloomz-7b1-mt
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 56.51
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 65.76
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 57.83
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 51.82
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 57.41
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 55.87
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 62.7
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 42.6
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 39.4
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 42.0
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 83.93
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 82.67
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 55.58
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 44.9
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.92
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 42.89
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.92
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 57.35
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.67
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 53.45
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.24
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.27
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 41.08
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 38.71
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 49.48
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.5
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.3
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 7.23
- type: Pass@10
value: 14.46
- type: Pass@100
value: 25.86
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: "2016"
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 89.58
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 84.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 52.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 54.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 73.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 62.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 62.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 61.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 77.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 80.0
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 83.85
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 88.82
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.26
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 80.41
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 84.58
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 51.56
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 64.26
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 71.01
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 73.06
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 85.9
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t'aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
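As a small illustration of the point above, the following sketch contrasts an ambiguous prompt with a clearly delimited one, reusing the `tokenizer` and `model` from the CPU example in "How to use" (exact outputs will vary):
```python
# Sketch: contrast an ambiguous prompt with a clearly delimited one.
# Reuses `tokenizer` and `model` from the CPU example above.
ambiguous = "Translate to English: Je t'aime"                # may continue the French
delimited = "Translate to English: Je t'aime. Translation:"  # clear answer boundary
for prompt in (ambiguous, delimited):
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs)
    print(repr(tokenizer.decode(outputs[0])))
```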
# Training
## Model
- **Architecture:** Same as [bloom-7b1](https://huggingface.co/bigscience/bloom-7b1), also refer to the `config.json` file
- **Finetuning steps:** 1000
- **Finetuning tokens:** 4.19 billion
- **Finetuning layout:** 1x pipeline parallel, 1x tensor parallel, 64x data parallel
- **Precision:** float16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 64 A100 80GB GPUs with 8 GPUs per node (8 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
|
Kort/i207
|
Kort
| 2024-11-20T12:06:59Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T12:03:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cristianmanosalva/NER-finetuning-BETO-CM-V1
|
cristianmanosalva
| 2024-11-20T12:06:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:biobert_json",
"base_model:NazaGara/NER-fine-tuned-BETO",
"base_model:finetune:NazaGara/NER-fine-tuned-BETO",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-11-13T09:04:44Z |
---
library_name: transformers
license: cc-by-4.0
base_model: NazaGara/NER-fine-tuned-BETO
tags:
- generated_from_trainer
datasets:
- biobert_json
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER-finetuning-BETO-CM-V1
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: biobert_json
type: biobert_json
config: Biobert_json
split: validation
args: Biobert_json
metrics:
- type: precision
value: 0.949653802801782
name: Precision
- type: recall
value: 0.9613670941099761
name: Recall
- type: f1
value: 0.9554745511003105
name: F1
- type: accuracy
value: 0.976855614973262
name: Accuracy
pipeline_tag: token-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-finetuning-BETO-CM-V1
This model is a fine-tuned version of [NazaGara/NER-fine-tuned-BETO](https://huggingface.co/NazaGara/NER-fine-tuned-BETO) on the biobert_json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1236
- Precision: 0.9497
- Recall: 0.9614
- F1: 0.9555
- Accuracy: 0.9769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
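For reference, a minimal sketch of how these hyperparameters map onto 🤗 `TrainingArguments` (the `output_dir` is a placeholder, not taken from the original run):
```python
# Sketch: the hyperparameters above expressed as 🤗 TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="NER-finetuning-BETO-CM-V1",  # placeholder, not from the original run
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```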
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3411 | 1.0 | 612 | 0.1137 | 0.9437 | 0.9474 | 0.9456 | 0.9707 |
| 0.1072 | 2.0 | 1224 | 0.1090 | 0.9304 | 0.9685 | 0.9491 | 0.9727 |
| 0.0757 | 3.0 | 1836 | 0.1024 | 0.9450 | 0.9692 | 0.9569 | 0.9768 |
| 0.0589 | 4.0 | 2448 | 0.1050 | 0.9492 | 0.9666 | 0.9578 | 0.9774 |
| 0.0419 | 5.0 | 3060 | 0.1054 | 0.9498 | 0.9621 | 0.9559 | 0.9771 |
| 0.0365 | 6.0 | 3672 | 0.1124 | 0.9460 | 0.9583 | 0.9521 | 0.9753 |
| 0.0299 | 7.0 | 4284 | 0.1119 | 0.9495 | 0.9632 | 0.9563 | 0.9774 |
| 0.0282 | 8.0 | 4896 | 0.1187 | 0.9482 | 0.9625 | 0.9553 | 0.9771 |
| 0.0221 | 9.0 | 5508 | 0.1203 | 0.9496 | 0.9608 | 0.9551 | 0.9768 |
| 0.0192 | 10.0 | 6120 | 0.1236 | 0.9497 | 0.9614 | 0.9555 | 0.9769 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
tomaarsen/mpnet-base-nq-cgist-2-gte
|
tomaarsen
| 2024-11-20T11:53:02Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:50000",
"loss:CachedGISTEmbedLoss",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-20T11:52:34Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:50000
- loss:CachedGISTEmbedLoss
base_model: microsoft/mpnet-base
widget:
- source_sentence: who ordered the charge of the light brigade
sentences:
- Charge of the Light Brigade The Charge of the Light Brigade was a charge of British
light cavalry led by Lord Cardigan against Russian forces during the Battle of
Balaclava on 25 October 1854 in the Crimean War. Lord Raglan, overall commander
of the British forces, had intended to send the Light Brigade to prevent the Russians
from removing captured guns from overrun Turkish positions, a task well-suited
to light cavalry.
- UNICEF The United Nations International Children's Emergency Fund was created
by the United Nations General Assembly on 11 December 1946, to provide emergency
food and healthcare to children in countries that had been devastated by World
War II. The Polish physician Ludwik Rajchman is widely regarded as the founder
of UNICEF and served as its first chairman from 1946. On Rajchman's suggestion,
the American Maurice Pate was appointed its first executive director, serving
from 1947 until his death in 1965.[5][6] In 1950, UNICEF's mandate was extended
to address the long-term needs of children and women in developing countries everywhere.
In 1953 it became a permanent part of the United Nations System, and the words
"international" and "emergency" were dropped from the organization's name, making
it simply the United Nations Children's Fund, retaining the original acronym,
"UNICEF".[3]
- Marcus Jordan Marcus James Jordan (born December 24, 1990) is an American former
college basketball player who played for the UCF Knights men's basketball team
of Conference USA.[1] He is the son of retired Hall of Fame basketball player
Michael Jordan.
- source_sentence: what part of the cow is the rib roast
sentences:
- Standing rib roast A standing rib roast, also known as prime rib, is a cut of
beef from the primal rib, one of the nine primal cuts of beef. While the entire
rib section comprises ribs six through 12, a standing rib roast may contain anywhere
from two to seven ribs.
- Blaine Anderson Kurt begins to mend their relationship in "Thanksgiving", just
before New Directions loses at Sectionals to the Warblers, and they spend Christmas
together in New York City.[29][30] Though he and Kurt continue to be on good terms,
Blaine finds himself developing a crush on his best friend, Sam, which he knows
will come to nothing as he knows Sam is not gay; the two of them team up to find
evidence that the Warblers cheated at Sectionals, which means New Directions will
be competing at Regionals. He ends up going to the Sadie Hawkins dance with Tina
Cohen-Chang (Jenna Ushkowitz), who has developed a crush on him, but as friends
only.[31] When Kurt comes to Lima for the wedding of glee club director Will (Matthew
Morrison) and Emma (Jayma Mays)โwhich Emma fleesโhe and Blaine make out beforehand,
and sleep together afterward, though they do not resume a permanent relationship.[32]
- 'Soviet Union The Soviet Union (Russian: Сове́тский Сою́з, tr. Sovétsky Soyúz,
    IPA: [sɐˈvʲet͡skʲɪj sɐˈjus] ( listen)), officially the Union of Soviet Socialist
    Republics (Russian: Сою́з Сове́тских Социалисти́ческих Респу́блик, tr. Soyúz
    Sovétskikh Sotsialistícheskikh Respúblik, IPA: [sɐˈjus sɐˈvʲetskʲɪx
    sətsɨəlʲɪsˈtʲitɕɪskʲɪx rʲɪˈspublʲɪk] ( listen)), abbreviated as the USSR
    (Russian: СССР, tr. SSSR), was
a socialist state in Eurasia that existed from 1922 to 1991. Nominally a union
of multiple national Soviet republics,[a] its government and economy were highly
centralized. The country was a one-party state, governed by the Communist Party
with Moscow as its capital in its largest republic, the Russian Soviet Federative
Socialist Republic. The Russian nation had constitutionally equal status among
the many nations of the union but exerted de facto dominance in various respects.[7]
Other major urban centres were Leningrad, Kiev, Minsk, Alma-Ata and Novosibirsk.
The Soviet Union was one of the five recognized nuclear weapons states and possessed
the largest stockpile of weapons of mass destruction.[8] It was a founding permanent
member of the United Nations Security Council, as well as a member of the Organization
for Security and Co-operation in Europe (OSCE) and the leading member of the Council
for Mutual Economic Assistance (CMEA) and the Warsaw Pact.'
- source_sentence: what is the current big bang theory season
sentences:
- Byzantine army From the seventh to the 12th centuries, the Byzantine army was
among the most powerful and effective military forces in the world – neither
Middle Ages Europe nor (following its early successes) the fracturing Caliphate
could match the strategies and the efficiency of the Byzantine army. Restricted
to a largely defensive role in the 7th to mid-9th centuries, the Byzantines developed
the theme-system to counter the more powerful Caliphate. From the mid-9th century,
however, they gradually went on the offensive, culminating in the great conquests
of the 10th century under a series of soldier-emperors such as Nikephoros II Phokas,
John Tzimiskes and Basil II. The army they led was less reliant on the militia
of the themes; it was by now a largely professional force, with a strong and well-drilled
infantry at its core and augmented by a revived heavy cavalry arm. With one of
the most powerful economies in the world at the time, the Empire had the resources
to put to the field a powerful host when needed, in order to reclaim its long-lost
territories.
- The Big Bang Theory The Big Bang Theory is an American television sitcom created
by Chuck Lorre and Bill Prady, both of whom serve as executive producers on the
series, along with Steven Molaro. All three also serve as head writers. The show
premiered on CBS on September 24, 2007.[3] The series' tenth season premiered
on September 19, 2016.[4] In March 2017, the series was renewed for two additional
seasons, bringing its total to twelve, and running through the 2018–19 television
season. The eleventh season is set to premiere on September 25, 2017.[5]
- 2016 NCAA Division I Softball Tournament The 2016 NCAA Division I Softball Tournament
was held from May 20 through June 8, 2016 as the final part of the 2016 NCAA Division
I softball season. The 64 NCAA Division I college softball teams were to be selected
out of an eligible 293 teams on May 15, 2016. Thirty-two teams were awarded an
automatic bid as champions of their conference, and thirty-two teams were selected
at-large by the NCAA Division I softball selection committee. The tournament culminated
with eight teams playing in the 2016 Women's College World Series at ASA Hall
of Fame Stadium in Oklahoma City in which the Oklahoma Sooners were crowned the
champions.
- source_sentence: what happened to tates mom on days of our lives
sentences:
- 'Paige O''Hara Donna Paige Helmintoller, better known as Paige O''Hara (born May
10, 1956),[1] is an American actress, voice actress, singer and painter. O''Hara
began her career as a Broadway actress in 1983 when she portrayed Ellie May Chipley
in the musical Showboat. In 1991, she made her motion picture debut in Disney''s
Beauty and the Beast, in which she voiced the film''s heroine, Belle. Following
the critical and commercial success of Beauty and the Beast, O''Hara reprised
her role as Belle in the film''s two direct-to-video follow-ups, Beauty and the
Beast: The Enchanted Christmas and Belle''s Magical World.'
- M. Shadows Matthew Charles Sanders (born July 31, 1981), better known as M. Shadows,
is an American singer, songwriter, and musician. He is best known as the lead
vocalist, songwriter, and a founding member of the American heavy metal band Avenged
Sevenfold. In 2017, he was voted 3rd in the list of Top 25 Greatest Modern Frontmen
by Ultimate Guitar.[1]
- Theresa Donovan In July 2013, Jeannie returns to Salem, this time going by her
middle name, Theresa. Initially, she strikes up a connection with resident bad
boy JJ Deveraux (Casey Moss) while trying to secure some pot.[28] During a confrontation
with JJ and his mother Jennifer Horton (Melissa Reeves) in her office, her aunt
Kayla confirms that Theresa is in fact Jeannie and that Jen promised to hire her
as her assistant, a promise she reluctantly agrees to. Kayla reminds Theresa it
is her last chance at a fresh start.[29] Theresa also strikes up a bad first impression
with Jennifer's daughter Abigail Deveraux (Kate Mansi) when Abigail smells pot
on Theresa in her mother's office.[30] To continue to battle against Jennifer,
she teams up with Anne Milbauer (Meredith Scott Lynn) in hopes of exacting her
perfect revenge. In a ploy, Theresa reveals her intentions to hopefully woo Dr.
Daniel Jonas (Shawn Christian). After sleeping with JJ, Theresa overdoses on marijuana
and GHB. Upon hearing of their daughter's overdose and continuing problems, Shane
and Kimberly return to town in the hopes of handling their daughter's problem,
together. After believing that Theresa has a handle on her addictions, Shane and
Kimberly leave town together. Theresa then teams up with hospital co-worker Anne
Milbauer (Meredith Scott Lynn) to conspire against Jennifer, using Daniel as a
way to hurt their relationship. In early 2014, following a Narcotics Anonymous
(NA) meeting, she begins a sexual and drugged-fused relationship with Brady Black
(Eric Martsolf). In 2015, after it is found that Kristen DiMera (Eileen Davidson)
stole Theresa's embryo and carried it to term, Brady and Melanie Jonas return
her son, Christopher, to her and Brady, and the pair rename him Tate. When Theresa
moves into the Kiriakis mansion, tensions arise between her and Victor. She eventually
expresses her interest in purchasing Basic Black and running it as her own fashion
company, with financial backing from Maggie Horton (Suzanne Rogers). In the hopes
of finding the right partner, she teams up with Kate Roberts (Lauren Koslow) and
Nicole Walker (Arianne Zucker) to achieve the goal of purchasing Basic Black,
with Kate and Nicole's business background and her own interest in fashion design.
As she and Brady share several instances of rekindling their romance, she is kicked
out of the mansion by Victor; as a result, Brady quits Titan and moves in with
Theresa and Tate, in their own penthouse.
- source_sentence: where does the last name francisco come from
sentences:
- Francisco Francisco is the Spanish and Portuguese form of the masculine given
name Franciscus (corresponding to English Francis).
- 'Book of Esther The Book of Esther, also known in Hebrew as "the Scroll" (Megillah),
is a book in the third section (Ketuvim, "Writings") of the Jewish Tanakh (the
Hebrew Bible) and in the Christian Old Testament. It is one of the five Scrolls
(Megillot) in the Hebrew Bible. It relates the story of a Hebrew woman in Persia,
born as Hadassah but known as Esther, who becomes queen of Persia and thwarts
a genocide of her people. The story forms the core of the Jewish festival of Purim,
during which it is read aloud twice: once in the evening and again the following
morning. The books of Esther and Song of Songs are the only books in the Hebrew
Bible that do not explicitly mention God.[2]'
- Times Square Times Square is a major commercial intersection, tourist destination,
entertainment center and neighborhood in the Midtown Manhattan section of New
York City at the junction of Broadway and Seventh Avenue. It stretches from West
42nd to West 47th Streets.[1] Brightly adorned with billboards and advertisements,
Times Square is sometimes referred to as "The Crossroads of the World",[2] "The
Center of the Universe",[3] "the heart of The Great White Way",[4][5][6] and the
"heart of the world".[7] One of the world's busiest pedestrian areas,[8] it is
also the hub of the Broadway Theater District[9] and a major center of the world's
entertainment industry.[10] Times Square is one of the world's most visited tourist
attractions, drawing an estimated 50 million visitors annually.[11] Approximately
330,000 people pass through Times Square daily,[12] many of them tourists,[13]
while over 460,000 pedestrians walk through Times Square on its busiest days.[7]
datasets:
- sentence-transformers/natural-questions
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
co2_eq_emissions:
emissions: 59.31009589078217
energy_consumed: 0.15258500314066348
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.396
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: MPNet base trained on Natural Questions pairs
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.16
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.34
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.64
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.16
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.12
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08199999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.06
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.12166666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.24833333333333332
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.31566666666666665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.22803817515986124
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.30941269841269836
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.1655130902515993
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.52
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.62
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.78
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.52
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.36
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.364
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.322
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0336711515516074
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.06005334302891617
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.1119370784549358
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.1974683849453542
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.37302114460618035
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5887222222222221
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2524550843440785
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.28
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.52
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.28
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666663
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10800000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.064
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.28
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.48
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.51
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4358687601068153
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.38569047619047614
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3903171462871314
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.14
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.32
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.36
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.46
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.06933333333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.20319047619047617
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2276904761904762
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.32354761904761903
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2271808224609275
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.23985714285714288
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.18355553344945122
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.32
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.44
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.48
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.58
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.32
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1733333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11600000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.068
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.26
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.29
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.34
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.30497689087635044
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.39905555555555544
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.26301906759091515
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.14
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.28
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.34
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.44
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.09333333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.068
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.044000000000000004
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.14
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.28
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.34
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.44
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.27595760463916813
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.22488095238095238
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.24656541883369498
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.22
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.3
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.34
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.36
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.22
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1533333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.124
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.096
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.007116944515649617
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.01288483574625764
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.02025290517580909
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.02555956272966021
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.11695533319556885
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.2651904761904762
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.030363746300173234
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.14
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.24
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.32
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.48
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.14
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.07999999999999999
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.06400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.05
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.13
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.22
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.29
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.46
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2706566987839319
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.22174603174603175
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.22631004639318789
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.78
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.88
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.94
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.78
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.35999999999999993
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.23999999999999994
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13199999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6806666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8346666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8793333333333334
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9366666666666665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8528887039265185
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8324126984126984
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.820234632034632
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.28
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.42
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.52
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.28
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.12399999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.05866666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.14066666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.20566666666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.25566666666666665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.24909911706779386
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.38332539682539685
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.20162687946594338
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.18
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.52
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.64
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.88
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.18
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.17333333333333337
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.088
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.18
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.52
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.64
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.88
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5102396499498778
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3946269841269841
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4001733643377607
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.3
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.34
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.42
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.11999999999999998
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09200000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.055999999999999994
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.265
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.315
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.485
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3688721552089384
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3476666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.34115921547380024
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.4897959183673469
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7346938775510204
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8163265306122449
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9387755102040817
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4897959183673469
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4013605442176871
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3673469387755102
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.3102040816326531
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.036516156386696134
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.08582342270510718
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.12560656255524566
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.2064747763464094
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3575303928348819
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6281098153547133
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.27828847509729454
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.30383045525902674
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.45651491365777075
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5320251177394035
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6337519623233908
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.30383045525902674
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19702773417059127
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16148822605965465
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.11586185243328104
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16161314762466306
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2718424675131352
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.32990925813152305
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.42046541100531104
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.35163734221667803
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4015920859186165
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2922755153738202
name: Cosine Map@100
---
# MPNet base trained on Natural Questions pairs
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
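The two modules above amount to running the transformer and then mean-pooling the token embeddings over non-padding positions. As an illustration only (Sentence Transformers does all of this internally), here is a sketch of the equivalent steps in plain `transformers`, assuming the checkpoint name from the Usage section below:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tomaarsen/mpnet-base-nq-cgist-2-gte")
model = AutoModel.from_pretrained("tomaarsen/mpnet-base-nq-cgist-2-gte")

encoded = tokenizer(["example sentence"], padding=True, truncation=True,
                    max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq, 768)

# Mean pooling over non-padding tokens, matching pooling_mode_mean_tokens=True.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```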
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-2-gte")
# Run inference
sentences = [
'where does the last name francisco come from',
'Francisco Francisco is the Spanish and Portuguese form of the masculine given name Franciscus (corresponding to English Francis).',
'Book of Esther The Book of Esther, also known in Hebrew as "the Scroll" (Megillah), is a book in the third section (Ketuvim, "Writings") of the Jewish Tanakh (the Hebrew Bible) and in the Christian Old Testament. It is one of the five Scrolls (Megillot) in the Hebrew Bible. It relates the story of a Hebrew woman in Persia, born as Hadassah but known as Esther, who becomes queen of Persia and thwarts a genocide of her people. The story forms the core of the Jewish festival of Purim, during which it is read aloud twice: once in the evening and again the following morning. The books of Esther and Song of Songs are the only books in the Hebrew Bible that do not explicitly mention God.[2]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.16 | 0.52 | 0.28 | 0.14 | 0.32 | 0.14 | 0.22 | 0.14 | 0.78 | 0.28 | 0.18 | 0.3 | 0.4898 |
| cosine_accuracy@3 | 0.34 | 0.62 | 0.5 | 0.32 | 0.44 | 0.28 | 0.3 | 0.24 | 0.88 | 0.42 | 0.52 | 0.34 | 0.7347 |
| cosine_accuracy@5 | 0.56 | 0.7 | 0.52 | 0.36 | 0.48 | 0.34 | 0.34 | 0.32 | 0.9 | 0.52 | 0.64 | 0.42 | 0.8163 |
| cosine_accuracy@10 | 0.64 | 0.78 | 0.62 | 0.46 | 0.58 | 0.44 | 0.36 | 0.48 | 0.94 | 0.62 | 0.88 | 0.5 | 0.9388 |
| cosine_precision@1 | 0.16 | 0.52 | 0.28 | 0.14 | 0.32 | 0.14 | 0.22 | 0.14 | 0.78 | 0.28 | 0.18 | 0.3 | 0.4898 |
| cosine_precision@3 | 0.12 | 0.36 | 0.1667 | 0.1333 | 0.1733 | 0.0933 | 0.1533 | 0.08 | 0.36 | 0.2267 | 0.1733 | 0.12 | 0.4014 |
| cosine_precision@5 | 0.128 | 0.364 | 0.108 | 0.1 | 0.116 | 0.068 | 0.124 | 0.064 | 0.24 | 0.2 | 0.128 | 0.092 | 0.3673 |
| cosine_precision@10 | 0.082 | 0.322 | 0.064 | 0.07 | 0.068 | 0.044 | 0.096 | 0.05 | 0.132 | 0.124 | 0.088 | 0.056 | 0.3102 |
| cosine_recall@1 | 0.06 | 0.0337 | 0.28 | 0.0693 | 0.16 | 0.14 | 0.0071 | 0.13 | 0.6807 | 0.0587 | 0.18 | 0.265 | 0.0365 |
| cosine_recall@3 | 0.1217 | 0.0601 | 0.48 | 0.2032 | 0.26 | 0.28 | 0.0129 | 0.22 | 0.8347 | 0.1407 | 0.52 | 0.315 | 0.0858 |
| cosine_recall@5 | 0.2483 | 0.1119 | 0.51 | 0.2277 | 0.29 | 0.34 | 0.0203 | 0.29 | 0.8793 | 0.2057 | 0.64 | 0.4 | 0.1256 |
| cosine_recall@10 | 0.3157 | 0.1975 | 0.6 | 0.3235 | 0.34 | 0.44 | 0.0256 | 0.46 | 0.9367 | 0.2557 | 0.88 | 0.485 | 0.2065 |
| **cosine_ndcg@10** | **0.228** | **0.373** | **0.4359** | **0.2272** | **0.305** | **0.276** | **0.117** | **0.2707** | **0.8529** | **0.2491** | **0.5102** | **0.3689** | **0.3575** |
| cosine_mrr@10 | 0.3094 | 0.5887 | 0.3857 | 0.2399 | 0.3991 | 0.2249 | 0.2652 | 0.2217 | 0.8324 | 0.3833 | 0.3946 | 0.3477 | 0.6281 |
| cosine_map@100 | 0.1655 | 0.2525 | 0.3903 | 0.1836 | 0.263 | 0.2466 | 0.0304 | 0.2263 | 0.8202 | 0.2016 | 0.4002 | 0.3412 | 0.2783 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3038 |
| cosine_accuracy@3 | 0.4565 |
| cosine_accuracy@5 | 0.532 |
| cosine_accuracy@10 | 0.6338 |
| cosine_precision@1 | 0.3038 |
| cosine_precision@3 | 0.197 |
| cosine_precision@5 | 0.1615 |
| cosine_precision@10 | 0.1159 |
| cosine_recall@1 | 0.1616 |
| cosine_recall@3 | 0.2718 |
| cosine_recall@5 | 0.3299 |
| cosine_recall@10 | 0.4205 |
| **cosine_ndcg@10** | **0.3516** |
| cosine_mrr@10 | 0.4016 |
| cosine_map@100 | 0.2923 |
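These numbers can be reproduced with the evaluator linked above. A minimal sketch, assuming the checkpoint name from the Usage section and the evaluator's default selection of all 13 NanoBEIR datasets:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("tomaarsen/mpnet-base-nq-cgist-2-gte")

# Pass dataset_names=["msmarco", "nq"] (for example) to restrict the run.
evaluator = NanoBEIREvaluator()
results = evaluator(model)

# Metric keys mirror the column names in the Training Logs table below.
print(results["NanoBEIR_mean_cosine_ndcg@10"])
```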
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 50,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.74 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 137.2 tokens</li><li>max: 508 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who is required to report according to the hmda</code> | <code>Home Mortgage Disclosure Act US financial institutions must report HMDA data to their regulator if they meet certain criteria, such as having assets above a specific threshold. The criteria is different for depository and non-depository institutions and are available on the FFIEC website.[4] In 2012, there were 7,400 institutions that reported a total of 18.7 million HMDA records.[5]</code> |
| <code>what is the definition of endoplasmic reticulum in biology</code> | <code>Endoplasmic reticulum The endoplasmic reticulum (ER) is a type of organelle in eukaryotic cells that forms an interconnected network of flattened, membrane-enclosed sacs or tube-like structures known as cisternae. The membranes of the ER are continuous with the outer nuclear membrane. The endoplasmic reticulum occurs in most types of eukaryotic cells, but is absent from red blood cells and spermatozoa. There are two types of endoplasmic reticulum: rough and smooth. The outer (cytosolic) face of the rough endoplasmic reticulum is studded with ribosomes that are the sites of protein synthesis. The rough endoplasmic reticulum is especially prominent in cells such as hepatocytes. The smooth endoplasmic reticulum lacks ribosomes and functions in lipid manufacture and metabolism, the production of steroid hormones, and detoxification.[1] The smooth ER is especially abundant in mammalian liver and gonad cells. The lacy membranes of the endoplasmic reticulum were first seen in 1945 using elect...</code> |
| <code>what does the ski mean in polish names</code> | <code>Polish name Since the High Middle Ages, Polish-sounding surnames ending with the masculine -ski suffix, including -cki and -dzki, and the corresponding feminine suffix -ska/-cka/-dzka were associated with the nobility (Polish szlachta), which alone, in the early years, had such suffix distinctions.[1] They are widely popular today.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```python
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
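The block above is the string representation of the loss configuration. As a hedged sketch of how such a loss is constructed: the repository name and the 384-dimensional BERT guide architecture suggest a small GTE-style guide model, but the exact guide checkpoint and mini-batch size are not recorded in this card, so the values below are stand-ins:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedGISTEmbedLoss

model = SentenceTransformer("microsoft/mpnet-base")

# Placeholder guide: a 384-dim BERT encoder with mean pooling + normalization,
# matching the architecture printed above; the real checkpoint is unrecorded.
guide = SentenceTransformer("thenlper/gte-small")

loss = CachedGISTEmbedLoss(
    model,
    guide=guide,
    temperature=0.01,
    mini_batch_size=32,  # assumption: not listed in this card
)
```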
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 100,231 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.78 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 135.64 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>difference between russian blue and british blue cat</code> | <code>Russian Blue The coat is known as a "double coat", with the undercoat being soft, downy and equal in length to the guard hairs, which are an even blue with silver tips. However, the tail may have a few very dull, almost unnoticeable stripes. The coat is described as thick, plush and soft to the touch. The feeling is softer than the softest silk. The silver tips give the coat a shimmering appearance. Its eyes are almost always a dark and vivid green. Any white patches of fur or yellow eyes in adulthood are seen as flaws in show cats.[3] Russian Blues should not be confused with British Blues (which are not a distinct breed, but rather a British Shorthair with a blue coat as the British Shorthair breed itself comes in a wide variety of colors and patterns), nor the Chartreux or Korat which are two other naturally occurring breeds of blue cats, although they have similar traits.</code> |
| <code>who played the little girl on mrs doubtfire</code> | <code>Mara Wilson Mara Elizabeth Wilson[2] (born July 24, 1987) is an American writer and former child actress. She is known for playing Natalie Hillard in Mrs. Doubtfire (1993), Susan Walker in Miracle on 34th Street (1994), Matilda Wormwood in Matilda (1996) and Lily Stone in Thomas and the Magic Railroad (2000). Since retiring from film acting, Wilson has focused on writing.</code> |
| <code>what year did the movie the sound of music come out</code> | <code>The Sound of Music (film) The film was released on March 2, 1965 in the United States, initially as a limited roadshow theatrical release. Although critical response to the film was widely mixed, the film was a major commercial success, becoming the number one box office movie after four weeks, and the highest-grossing film of 1965. By November 1966, The Sound of Music had become the highest-grossing film of all-time—surpassing Gone with the Wind—and held that distinction for five years. The film was just as popular throughout the world, breaking previous box-office records in twenty-nine countries. Following an initial theatrical release that lasted four and a half years, and two successful re-releases, the film sold 283 million admissions worldwide and earned a total worldwide gross of $286,000,000.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```python
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
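These map one-to-one onto `SentenceTransformerTrainingArguments`; a sketch with a placeholder `output_dir`:
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/mpnet-base-nq-cgist",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
)
```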
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:-----:|:----:|:-------------:|:---------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.04 | 1 | 15.537 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2 | 5 | 11.6576 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4 | 10 | 7.1392 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6 | 15 | 5.0005 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8 | 20 | 4.0541 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0 | 25 | 3.4117 | 2.3797 | 0.2280 | 0.3730 | 0.4359 | 0.2272 | 0.3050 | 0.2760 | 0.1170 | 0.2707 | 0.8529 | 0.2491 | 0.5102 | 0.3689 | 0.3575 | 0.3516 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.153 kWh
- **Carbon Emitted**: 0.059 kg of CO2
- **Hours Used**: 0.396 hours
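For reference, a minimal sketch of this kind of measurement with CodeCarbon's default configuration (the tracker wraps the training run and reports kilograms of CO2-equivalent):
```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... training run goes here ...
emissions_kg = tracker.stop()  # emissions in kg of CO2-equivalent
print(f"{emissions_kg:.3f} kg CO2eq")
```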
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.4.0.dev0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.35.0.dev0
- Datasets: 2.20.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF
|
mradermacher
| 2024-11-20T11:52:20Z | 164 | 1 |
transformers
|
[
"transformers",
"gguf",
"gemma2",
"instruct",
"bggpt",
"insait",
"bg",
"en",
"base_model:INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0",
"base_model:quantized:INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-20T02:56:26Z |
---
base_model: INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0
language:
- bg
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- gemma2
- instruct
- bggpt
- insait
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/INSAIT-Institute/BgGPT-Gemma-2-9B-IT-v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
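As one concrete option among several GGUF runtimes, a quant from this repo can be loaded with `llama-cpp-python`; treat this as a sketch rather than the only supported path (the filename matches the Q4_K_M row in the table below):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF",
    filename="BgGPT-Gemma-2-9B-IT-v1.0.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in Bulgarian."}]
)
print(out["choices"][0]["message"]["content"])
```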
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/BgGPT-Gemma-2-9B-IT-v1.0-i1-GGUF/resolve/main/BgGPT-Gemma-2-9B-IT-v1.0.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/granite-3.0-2b-instruct-i1-GGUF
|
mradermacher
| 2024-11-20T11:52:20Z | 44 | 0 |
transformers
|
[
"transformers",
"gguf",
"language",
"granite-3.0",
"en",
"base_model:ibm-granite/granite-3.0-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.0-2b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-20T10:25:53Z |
---
base_model: ibm-granite/granite-3.0-2b-instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- language
- granite-3.0
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ibm-granite/granite-3.0-2b-instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/granite-3.0-2b-instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
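If you only need to fetch a single quant file, `huggingface_hub` can download it directly; a sketch using the Q4_K_M filename from the table below:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/granite-3.0-2b-instruct-i1-GGUF",
    filename="granite-3.0-2b-instruct.i1-Q4_K_M.gguf",
)
print(path)  # local path, ready to pass to any GGUF runtime such as llama.cpp
```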
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q2_K.gguf) | i1-Q2_K | 1.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 1.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.6 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.6 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 1.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.0-2b-instruct-i1-GGUF/resolve/main/granite-3.0-2b-instruct.i1-Q6_K.gguf) | i1-Q6_K | 2.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Kort/i206
|
Kort
| 2024-11-20T11:50:44Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T11:47:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/YamShadow-7B-GGUF
|
mradermacher
| 2024-11-20T11:43:37Z | 32 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"en",
"base_model:automerger/YamShadow-7B",
"base_model:quantized:automerger/YamShadow-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-20T09:22:09Z |
---
base_model: automerger/YamShadow-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/automerger/YamShadow-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/YamShadow-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/YamShadow-7B-GGUF/resolve/main/YamShadow-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
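To fetch one of these files programmatically (here the Q4_K_M file recommended above), a minimal `huggingface_hub` sketch:

```python
from huggingface_hub import hf_hub_download

# Downloads the quantized file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/YamShadow-7B-GGUF",
    filename="YamShadow-7B.Q4_K_M.gguf",
)
print(path)
```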
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
arthd24/pegasus_informative_trial_10k
|
arthd24
| 2024-11-20T11:37:34Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"pegasus",
"text2text-generation",
"generated_from_keras_callback",
"base_model:thonyyy/pegasus_indonesian_base-finetune",
"base_model:finetune:thonyyy/pegasus_indonesian_base-finetune",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-11-20T05:52:56Z |
---
library_name: transformers
license: apache-2.0
base_model: thonyyy/pegasus_indonesian_base-finetune
tags:
- generated_from_keras_callback
model-index:
- name: arthd24/pegasus_informative_trial_10k
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arthd24/pegasus_informative_trial_10k
This model is a fine-tuned version of [thonyyy/pegasus_indonesian_base-finetune](https://huggingface.co/thonyyy/pegasus_indonesian_base-finetune) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0309
- Validation Loss: 1.2656
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
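A minimal sketch recreating this optimizer with transformers' TF/Keras `AdamWeightDecay` class (the actual training script is not published, so treat this as a reconstruction; requires TensorFlow):

```python
from transformers import AdamWeightDecay

# Matches the optimizer settings listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
```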
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1622 | 1.2417 | 0 |
| 1.0914 | 1.2541 | 1 |
| 1.0309 | 1.2656 | 2 |
### Framework versions
- Transformers 4.45.1
- TensorFlow 2.16.1
- Datasets 3.0.1
- Tokenizers 0.20.0
|
HarshGT/HSGPT_Merged
|
HarshGT
| 2024-11-20T11:37:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T11:31:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bunnycore/Qandora-2.5-7B-Creative
|
bunnycore
| 2024-11-20T11:26:53Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:bunnycore/Qandora-2.5-7B",
"base_model:merge:bunnycore/Qandora-2.5-7B",
"base_model:bunnycore/Qwen-2.1-7b-Persona-lora_model",
"base_model:merge:bunnycore/Qwen-2.1-7b-Persona-lora_model",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T11:22:46Z |
---
base_model:
- bunnycore/Qandora-2.5-7B
- bunnycore/Qwen-2.1-7b-Persona-lora_model
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method using [bunnycore/Qandora-2.5-7B](https://huggingface.co/bunnycore/Qandora-2.5-7B) + [bunnycore/Qwen-2.1-7b-Persona-lora_model](https://huggingface.co/bunnycore/Qwen-2.1-7b-Persona-lora_model) as a base.
### Models Merged
The following models were included in the merge:
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: bunnycore/Qandora-2.5-7B+bunnycore/Qwen-2.1-7b-Persona-lora_model
dtype: bfloat16
merge_method: passthrough
models:
- model: bunnycore/Qandora-2.5-7B+bunnycore/Qwen-2.1-7b-Persona-lora_model
```
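### Usage

A minimal `transformers` sketch for trying the merged model (standard chat-template inference; the prompt and generation settings are illustrative, not the author's):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Qandora-2.5-7B-Creative"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a short story about a lighthouse keeper."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```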
|
wath5/kgl_lmsys_pref_classif
|
wath5
| 2024-11-20T11:23:40Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-classification",
"en",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-classification
| 2024-11-20T09:43:55Z |
---
library_name: transformers
language:
- en
base_model:
- google/gemma-2-9b-it
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Given a (Query, Model A Answer, Model B Answer) triple, this model outputs a 3-dimensional probability vector, following the LMSYS competition format: (Model A win probability, Model B win probability, tie probability).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** @sayoulala (Yang Zhou)
- **Model type:** Gemma for Sentence Classification
- **Language(s) (NLP):** English Only
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
Mimics human preference given a query and two candidate answers.
### Direct Use
```python
import torch
from torch import nn
from torch.nn import CrossEntropyLoss, MSELoss, BCEWithLogitsLoss
from transformers import Gemma2PreTrainedModel, Gemma2Model, Cache, AutoTokenizer
from transformers.modeling_outputs import SequenceClassifierOutputWithPast
from typing import Optional, List, Union, Tuple
from dataclasses import dataclass


@dataclass
class Config:
    gemma_dir = 'wath5/kgl_lmsys_pref_classif'
    max_length = 2000
    batch_size = 8
    device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

cfg = Config()


class Gemma2ForSequenceClassificationV1(Gemma2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = Gemma2Model(config)
        self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.embed_tokens

    def set_input_embeddings(self, value):
        self.model.embed_tokens = value

    def forward(
        self,
        input_ids: torch.LongTensor = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]
        # logits = self.score(hidden_states)
        if input_ids is not None:
            batch_size = input_ids.shape[0]
        else:
            batch_size = inputs_embeds.shape[0]

        if self.config.pad_token_id is None and batch_size != 1:
            raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
        if self.config.pad_token_id is None:
            sequence_lengths = -1
        else:
            if input_ids is not None:
                # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
                sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
                sequence_lengths = sequence_lengths % input_ids.shape[-1]
                sequence_lengths = sequence_lengths.to(hidden_states.device)
            else:
                sequence_lengths = -1

        # Pool the hidden state at the final (EOS) position of each sequence.
        hidden_states = hidden_states[
            torch.arange(batch_size, device=hidden_states.device), sequence_lengths]
        pooled_logits = self.score(hidden_states)
        return pooled_logits


tokenizer = AutoTokenizer.from_pretrained(cfg.gemma_dir)  # the original notebook loaded a local Kaggle copy of this tokenizer
model = Gemma2ForSequenceClassificationV1.from_pretrained(
    cfg.gemma_dir,
    num_labels=3,
    device_map=cfg.device,
    use_cache=False,
)
model.config.pad_token_id = tokenizer.pad_token_id
```
## How to Get Started with the Model
```python
from transformers.data.data_collator import pad_without_fast_tokenizer_warning


@torch.no_grad()
def single_prompt_inference(prompt, model, device, max_length=cfg.max_length):
    """
    Perform inference on a single prompt.

    Args:
        prompt (str): The input prompt for inference.
        model (torch.nn.Module): The model used for inference.
        device (torch.device): The device to run inference on.
        max_length (int): Maximum sequence length for tokenization.
        (The module-level `tokenizer` defined above is used for preprocessing.)

    Returns:
        dict: Probabilities for "winner_model_a", "winner_model_b", and "tie".
    """
    # Tokenize the input prompt
    input_ids = tokenizer(prompt, truncation=True, max_length=max_length)['input_ids']
    input_ids.append(tokenizer.eos_token_id)  # the model pools the hidden state at the EOS position

    # Prepare inputs
    inputs = pad_without_fast_tokenizer_warning(
        tokenizer,
        {"input_ids": [input_ids]},  # Wrap in a list for compatibility
        padding="max_length",
        pad_to_multiple_of=None,
        max_length=max_length,
        return_tensors="pt",
    )

    # Move inputs to the appropriate device
    inputs = inputs.to(device)

    # Run the model
    outputs = model(**inputs)

    # Get probabilities using softmax
    proba = outputs.softmax(-1).cpu().squeeze()
    return {
        "winner_model_a": proba[0].item(),
        "winner_model_b": proba[1].item(),
        "tie": proba[2].item(),
    }


def create_rounds(query: str,
                  answer_a: str,
                  answer_b: str) -> str:
    prompt = f"""User question:
\"""{query}\"""
Answer A:
\"""{answer_a}\"""
Answer B:
\"""{answer_b}\"""
"""
    return prompt


query = "Hello, what is the height of the reassembled blind product?"
answer_a = "You can find all the technical information, including the height of the assembled blind, directly on the product page of our website. Click on the 'Products' tab in the navigation bar or use the search engine to find the product you are looking for. Do you have another question?"  # translated from the original French example
answer_b = "The height of the aluminum Venetian blind is 130 cm."
prompt_direct = create_rounds(query, answer_a, answer_b)
single_prompt_inference(prompt_direct, model=model, device=cfg.device)
```
## Training Details
Training code and full write-up: https://github.com/shyoulala/LMSYS_BlackPearl
|
Triangle104/MN-Violet-Lotus-12B-Q6_K-GGUF
|
Triangle104
| 2024-11-20T11:20:09Z | 11 | 1 |
transformers
|
[
"transformers",
"gguf",
"storywriting",
"text adventure",
"creative",
"story",
"writing",
"fiction",
"roleplaying",
"rp",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:FallenMerick/MN-Violet-Lotus-12B",
"base_model:quantized:FallenMerick/MN-Violet-Lotus-12B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-20T11:18:40Z |
---
license: cc-by-4.0
language:
- en
base_model: FallenMerick/MN-Violet-Lotus-12B
library_name: transformers
tags:
- storywriting
- text adventure
- creative
- story
- writing
- fiction
- roleplaying
- rp
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/MN-Violet-Lotus-12B-Q6_K-GGUF
This model was converted to GGUF format from [`FallenMerick/MN-Violet-Lotus-12B`](https://huggingface.co/FallenMerick/MN-Violet-Lotus-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FallenMerick/MN-Violet-Lotus-12B) for more details on the model.
---
Model details:
-
This is the model I was trying to create when Chunky-Lotus emerged. Not only does this model score higher on my local EQ benchmarks (80.00 w/ 100% parsed @ 8-bit), but it does an even better job at roleplaying and producing creative outputs while still adhering to wide ranges of character personalities. The high levels of emotional intelligence are really quite noticeable as well.
Once again, models tend to score higher on my local tests when compared to their posted scores, but this has become the new high score for models I've personally tested.
I really like the way this model writes, and I hope you'll enjoy using it as well!
Merge Details
-
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the Model Stock merge method.
Models Merged
The following models were included in the merge:
* Epiculous/Violet_Twilight-v0.2
* NeverSleep/Lumimaid-v0.2-12B
* flammenai/Mahou-1.5-mistral-nemo-12B
* Sao10K/MN-12B-Lyra-v4
Configuration
-
The following YAML configuration was used to produce this model:
models:
  - model: FallenMerick/MN-Twilight-Maid-SLERP-12B # (unreleased)
  - model: Sao10K/MN-12B-Lyra-v4
  - model: flammenai/Mahou-1.5-mistral-nemo-12B
merge_method: model_stock
base_model: mistralai/Mistral-Nemo-Instruct-2407
parameters:
  normalize: true
dtype: bfloat16
In this recipe, Violet Twilight and Lumimaid were first blended using the SLERP method to create a strong roleplaying foundation. Lyra v4 is then added to the mix for its great creativity and roleplaying performance, along with Mahou to once again curtail the outputs and prevent the resulting model from becoming too wordy. Model Stock was used for the final merge in order to really push the resulting weights in the proper direction while using Nemo Instruct as a strong anchor point.
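For reference, SLERP (spherical linear interpolation) moves along the great-circle arc between the two flattened weight tensors instead of averaging them linearly, which tends to preserve each parent's weight geometry. A minimal sketch of the operation (an illustration of the technique, not mergekit's actual code):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, t in [0, 1]."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    cos_omega = torch.clamp(
        (a_flat @ b_flat) / (a_flat.norm() * b_flat.norm() + eps), -1.0, 1.0
    )
    omega = torch.arccos(cos_omega)  # angle between the two weight vectors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel tensors: fall back to linear interpolation
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```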
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-Violet-Lotus-12B-Q6_K-GGUF --hf-file mn-violet-lotus-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/MN-Violet-Lotus-12B-Q6_K-GGUF --hf-file mn-violet-lotus-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-Violet-Lotus-12B-Q6_K-GGUF --hf-file mn-violet-lotus-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-Violet-Lotus-12B-Q6_K-GGUF --hf-file mn-violet-lotus-12b-q6_k.gguf -c 2048
```
|
Darkknight535/MS-Moonlight-22B-v3-Q8_0-GGUF
|
Darkknight535
| 2024-11-20T11:15:56Z | 44 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Darkknight535/MS-Moonlight-22B-v3",
"base_model:quantized:Darkknight535/MS-Moonlight-22B-v3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-20T11:14:17Z |
---
base_model: Darkknight535/MS-Moonlight-22B-v3
tags:
- llama-cpp
- gguf-my-repo
---
# Darkknight535/MS-Moonlight-22B-v3-Q8_0-GGUF
This model was converted to GGUF format from [`Darkknight535/MS-Moonlight-22B-v3`](https://huggingface.co/Darkknight535/MS-Moonlight-22B-v3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Darkknight535/MS-Moonlight-22B-v3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Darkknight535/MS-Moonlight-22B-v3-Q8_0-GGUF --hf-file ms-moonlight-22b-v3-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Darkknight535/MS-Moonlight-22B-v3-Q8_0-GGUF --hf-file ms-moonlight-22b-v3-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Darkknight535/MS-Moonlight-22B-v3-Q8_0-GGUF --hf-file ms-moonlight-22b-v3-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Darkknight535/MS-Moonlight-22B-v3-Q8_0-GGUF --hf-file ms-moonlight-22b-v3-q8_0.gguf -c 2048
```
|
BlackBeenie/Bloslain-8B-v0.2
|
BlackBeenie
| 2024-11-20T11:11:44Z | 12 | 1 | null |
[
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"BlackBeenie/Neos-Llama-3.1-8B",
"Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder",
"Solshine/reflection-llama-3.1-8B",
"base_model:BlackBeenie/Neos-Llama-3.1-8B",
"base_model:merge:BlackBeenie/Neos-Llama-3.1-8B",
"base_model:Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder",
"base_model:merge:Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder",
"base_model:Solshine/reflection-llama-3.1-8B",
"base_model:merge:Solshine/reflection-llama-3.1-8B",
"model-index",
"region:us"
] | null | 2024-11-19T03:57:57Z |
---
tags:
- merge
- mergekit
- lazymergekit
- BlackBeenie/Neos-Llama-3.1-8B
- Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- Solshine/reflection-llama-3.1-8B
base_model:
- BlackBeenie/Neos-Llama-3.1-8B
- Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- Solshine/reflection-llama-3.1-8B
model-index:
- name: Bloslain-8B-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 50.23
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/Bloslain-8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/Bloslain-8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 14.5
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/Bloslain-8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.49
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/Bloslain-8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.45
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/Bloslain-8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.48
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=BlackBeenie/Bloslain-8B-v0.2
name: Open LLM Leaderboard
---
# Bloslain-8B-v0.2
Bloslain-8B-v0.2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [BlackBeenie/Neos-Llama-3.1-8B](https://huggingface.co/BlackBeenie/Neos-Llama-3.1-8B)
* [Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder](https://huggingface.co/Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder)
* [Solshine/reflection-llama-3.1-8B](https://huggingface.co/Solshine/reflection-llama-3.1-8B)
## ๐งฉ Configuration
```yaml
models:
- model: NousResearch/Meta-Llama-3.1-8B-Instruct
# No parameters necessary for base model
- model: BlackBeenie/Neos-Llama-3.1-8B
parameters:
density: 0.53
weight: 0.4
- model: Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
parameters:
density: 0.53
weight: 0.3
- model: Solshine/reflection-llama-3.1-8B
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
parameters:
int8_mask: true
dtype: bfloat16
```
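A note on the `density` and `weight` parameters above: DARE-TIES randomly drops entries of each model's task vector (its delta from the base model), rescales the survivors, and then combines the sparsified vectors with the given weights after TIES-style sign resolution. A minimal sketch of the DARE drop-and-rescale step (illustrative only, not mergekit's implementation):

```python
import torch

def dare_sparsify(task_vector: torch.Tensor, density: float) -> torch.Tensor:
    """Randomly keep a `density` fraction of task-vector entries, rescaling by 1/density."""
    mask = (torch.rand_like(task_vector) < density).to(task_vector.dtype)
    return task_vector * mask / density

# Example: sparsify the delta between a fine-tuned weight and the base weight.
base = torch.randn(4096, 4096)
finetuned = base + 0.01 * torch.randn(4096, 4096)
delta = dare_sparsify(finetuned - base, density=0.53)
merged_weight = base + 0.4 * delta  # weight: 0.4, as in the config above
```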
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "BlackBeenie/Bloslain-8B-v0.2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BlackBeenie__Bloslain-8B-v0.2)
| Metric |Value|
|-------------------|----:|
|Avg. |23.80|
|IFEval (0-Shot) |50.23|
|BBH (3-Shot) |30.66|
|MATH Lvl 5 (4-Shot)|14.50|
|GPQA (0-shot) | 7.49|
|MuSR (0-shot) |10.45|
|MMLU-PRO (5-shot) |29.48|
|
osmanh/LLama-3.2-1B-Quantiz-text-generation
|
osmanh
| 2024-11-20T11:10:34Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-11-20T11:04:33Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wassimm/llama-3-1-8b-Instruct-bnb-4bit-psycology-w
|
wassimm
| 2024-11-20T11:08:26Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-20T10:21:17Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wassimm
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MayBashendy/Arabic_FineTuningAraBERT_AugV4_k15_task3_organization_fold0
|
MayBashendy
| 2024-11-20T11:08:03Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-12T23:49:14Z |
---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV4_k15_task3_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV4_k15_task3_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9951
- Qwk: 0.0892
- Mse: 0.9951
- Rmse: 0.9976
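Qwk here is presumably the quadratic weighted kappa. A minimal scikit-learn sketch of how these metrics can be computed, assuming integer score labels (the values below are illustrative, not from this run):

```python
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [0, 1, 2, 1]  # illustrative gold scores
y_pred = [0, 2, 2, 1]  # illustrative model scores

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = mse ** 0.5
print(qwk, mse, rmse)
```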
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
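A minimal sketch of how these hyperparameters map onto `TrainingArguments` (the actual training script is not published; `output_dir` and other unstated settings are illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Arabic_FineTuningAraBERT_AugV4_k15_task3_organization_fold0",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```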
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0052 | 2 | 5.1585 | 0.0 | 5.1585 | 2.2712 |
| No log | 0.0105 | 4 | 2.8572 | -0.0577 | 2.8572 | 1.6903 |
| No log | 0.0157 | 6 | 1.1924 | 0.1895 | 1.1924 | 1.0920 |
| No log | 0.0210 | 8 | 0.7981 | 0.0 | 0.7981 | 0.8934 |
| No log | 0.0262 | 10 | 0.9258 | -0.2222 | 0.9258 | 0.9622 |
| No log | 0.0315 | 12 | 1.2159 | -0.1000 | 1.2159 | 1.1027 |
| No log | 0.0367 | 14 | 1.1946 | -0.3538 | 1.1946 | 1.0930 |
| No log | 0.0420 | 16 | 1.2013 | -0.1786 | 1.2013 | 1.0960 |
| No log | 0.0472 | 18 | 1.0582 | -0.0694 | 1.0582 | 1.0287 |
| No log | 0.0525 | 20 | 0.9011 | 0.0 | 0.9011 | 0.9492 |
| No log | 0.0577 | 22 | 0.9211 | 0.0 | 0.9211 | 0.9597 |
| No log | 0.0630 | 24 | 1.0786 | -0.2222 | 1.0786 | 1.0385 |
| No log | 0.0682 | 26 | 1.4372 | 0.0 | 1.4372 | 1.1988 |
| No log | 0.0735 | 28 | 1.3939 | 0.0 | 1.3939 | 1.1806 |
| No log | 0.0787 | 30 | 0.9801 | 0.2667 | 0.9801 | 0.9900 |
| No log | 0.0840 | 32 | 0.7175 | 0.0 | 0.7175 | 0.8470 |
| No log | 0.0892 | 34 | 0.7295 | 0.0 | 0.7295 | 0.8541 |
| No log | 0.0945 | 36 | 0.7406 | 0.0 | 0.7406 | 0.8606 |
| No log | 0.0997 | 38 | 0.8721 | 0.3889 | 0.8721 | 0.9338 |
| No log | 0.1050 | 40 | 1.0357 | 0.1270 | 1.0357 | 1.0177 |
| No log | 0.1102 | 42 | 0.9271 | 0.3210 | 0.9271 | 0.9628 |
| No log | 0.1155 | 44 | 0.7938 | 0.0 | 0.7938 | 0.8909 |
| No log | 0.1207 | 46 | 0.7414 | 0.0 | 0.7414 | 0.8611 |
| No log | 0.1260 | 48 | 0.7661 | 0.0 | 0.7661 | 0.8753 |
| No log | 0.1312 | 50 | 0.9624 | 0.2924 | 0.9624 | 0.9810 |
| No log | 0.1365 | 52 | 0.9366 | 0.1037 | 0.9366 | 0.9678 |
| No log | 0.1417 | 54 | 0.8864 | -0.0185 | 0.8864 | 0.9415 |
| No log | 0.1470 | 56 | 1.0448 | 0.1444 | 1.0448 | 1.0221 |
| No log | 0.1522 | 58 | 1.0542 | 0.1270 | 1.0542 | 1.0268 |
| No log | 0.1575 | 60 | 0.8958 | -0.0342 | 0.8958 | 0.9465 |
| No log | 0.1627 | 62 | 0.7927 | 0.0 | 0.7927 | 0.8903 |
| No log | 0.1680 | 64 | 0.7804 | 0.0 | 0.7804 | 0.8834 |
| No log | 0.1732 | 66 | 0.8005 | 0.0 | 0.8005 | 0.8947 |
| No log | 0.1785 | 68 | 0.9649 | 0.1852 | 0.9649 | 0.9823 |
| No log | 0.1837 | 70 | 1.3201 | 0.0 | 1.3201 | 1.1490 |
| No log | 0.1890 | 72 | 1.7101 | 0.0 | 1.7101 | 1.3077 |
| No log | 0.1942 | 74 | 1.5078 | 0.0 | 1.5078 | 1.2279 |
| No log | 0.1995 | 76 | 1.0546 | -0.1000 | 1.0546 | 1.0269 |
| No log | 0.2047 | 78 | 0.9158 | 0.2092 | 0.9158 | 0.9570 |
| No log | 0.2100 | 80 | 0.8941 | 0.4296 | 0.8941 | 0.9456 |
| No log | 0.2152 | 82 | 0.9223 | 0.3889 | 0.9223 | 0.9604 |
| No log | 0.2205 | 84 | 1.0538 | -0.0864 | 1.0538 | 1.0266 |
| No log | 0.2257 | 86 | 1.0547 | 0.0351 | 1.0547 | 1.0270 |
| No log | 0.2310 | 88 | 0.8609 | 0.4296 | 0.8609 | 0.9279 |
| No log | 0.2362 | 90 | 0.9217 | 0.2361 | 0.9217 | 0.9601 |
| No log | 0.2415 | 92 | 1.4220 | 0.0 | 1.4220 | 1.1925 |
| No log | 0.2467 | 94 | 1.4854 | 0.0 | 1.4854 | 1.2188 |
| No log | 0.2520 | 96 | 0.9959 | 0.0351 | 0.9959 | 0.9979 |
| No log | 0.2572 | 98 | 0.9345 | 0.1852 | 0.9345 | 0.9667 |
| No log | 0.2625 | 100 | 1.4116 | 0.0 | 1.4116 | 1.1881 |
| No log | 0.2677 | 102 | 1.5023 | 0.0 | 1.5023 | 1.2257 |
| No log | 0.2730 | 104 | 1.6127 | 0.0 | 1.6127 | 1.2699 |
| No log | 0.2782 | 106 | 1.4688 | 0.1270 | 1.4688 | 1.2120 |
| No log | 0.2835 | 108 | 0.9138 | 0.1852 | 0.9138 | 0.9559 |
| No log | 0.2887 | 110 | 0.8849 | 0.1037 | 0.8849 | 0.9407 |
| No log | 0.2940 | 112 | 0.8854 | -0.0185 | 0.8854 | 0.9410 |
| No log | 0.2992 | 114 | 0.8904 | 0.1270 | 0.8904 | 0.9436 |
| No log | 0.3045 | 116 | 1.2375 | 0.0 | 1.2375 | 1.1124 |
| No log | 0.3097 | 118 | 1.4920 | 0.0 | 1.4920 | 1.2215 |
| No log | 0.3150 | 120 | 1.3126 | 0.0 | 1.3126 | 1.1457 |
| No log | 0.3202 | 122 | 0.8399 | 0.2092 | 0.8399 | 0.9165 |
| No log | 0.3255 | 124 | 0.8407 | 0.2092 | 0.8407 | 0.9169 |
| No log | 0.3307 | 126 | 1.2377 | 0.1270 | 1.2377 | 1.1125 |
| No log | 0.3360 | 128 | 1.7342 | 0.0 | 1.7342 | 1.3169 |
| No log | 0.3412 | 130 | 1.7192 | 0.0 | 1.7192 | 1.3112 |
| No log | 0.3465 | 132 | 1.3658 | 0.1270 | 1.3658 | 1.1687 |
| No log | 0.3517 | 134 | 0.8717 | 0.2924 | 0.8717 | 0.9337 |
| No log | 0.3570 | 136 | 0.6946 | 0.1538 | 0.6946 | 0.8334 |
| No log | 0.3622 | 138 | 0.7223 | 0.1270 | 0.7223 | 0.8499 |
| No log | 0.3675 | 140 | 1.0067 | 0.2667 | 1.0067 | 1.0033 |
| No log | 0.3727 | 142 | 1.2800 | 0.0 | 1.2800 | 1.1314 |
| No log | 0.3780 | 144 | 1.4809 | 0.0 | 1.4809 | 1.2169 |
| No log | 0.3832 | 146 | 1.4458 | 0.0 | 1.4458 | 1.2024 |
| No log | 0.3885 | 148 | 1.3449 | 0.0 | 1.3449 | 1.1597 |
| No log | 0.3937 | 150 | 1.2736 | 0.0 | 1.2736 | 1.1285 |
| No log | 0.3990 | 152 | 1.1680 | 0.1270 | 1.1680 | 1.0807 |
| No log | 0.4042 | 154 | 1.4844 | 0.0 | 1.4844 | 1.2184 |
| No log | 0.4094 | 156 | 1.4872 | 0.0 | 1.4872 | 1.2195 |
| No log | 0.4147 | 158 | 0.9314 | 0.3529 | 0.9314 | 0.9651 |
| No log | 0.4199 | 160 | 0.7977 | 0.3529 | 0.7977 | 0.8931 |
| No log | 0.4252 | 162 | 1.1665 | 0.1270 | 1.1665 | 1.0800 |
| No log | 0.4304 | 164 | 1.8253 | -0.0087 | 1.8253 | 1.3510 |
| No log | 0.4357 | 166 | 1.8819 | -0.0154 | 1.8819 | 1.3718 |
| No log | 0.4409 | 168 | 1.4302 | 0.0 | 1.4302 | 1.1959 |
| No log | 0.4462 | 170 | 0.9288 | 0.3529 | 0.9287 | 0.9637 |
| No log | 0.4514 | 172 | 0.8775 | 0.3529 | 0.8775 | 0.9368 |
| No log | 0.4567 | 174 | 1.0389 | 0.2924 | 1.0389 | 1.0193 |
| No log | 0.4619 | 176 | 1.3262 | 0.0 | 1.3262 | 1.1516 |
| No log | 0.4672 | 178 | 1.2896 | 0.0106 | 1.2896 | 1.1356 |
| No log | 0.4724 | 180 | 0.9069 | -0.0593 | 0.9069 | 0.9523 |
| No log | 0.4777 | 182 | 0.8331 | -0.0476 | 0.8331 | 0.9128 |
| No log | 0.4829 | 184 | 0.9903 | 0.1852 | 0.9903 | 0.9951 |
| No log | 0.4882 | 186 | 1.2051 | 0.0106 | 1.2051 | 1.0978 |
| No log | 0.4934 | 188 | 1.4503 | 0.0 | 1.4503 | 1.2043 |
| No log | 0.4987 | 190 | 1.3204 | 0.0 | 1.3204 | 1.1491 |
| No log | 0.5039 | 192 | 1.0350 | 0.1852 | 1.0350 | 1.0174 |
| No log | 0.5092 | 194 | 0.9435 | 0.0654 | 0.9435 | 0.9713 |
| No log | 0.5144 | 196 | 1.0457 | 0.1852 | 1.0457 | 1.0226 |
| No log | 0.5197 | 198 | 1.2034 | 0.0351 | 1.2034 | 1.0970 |
| No log | 0.5249 | 200 | 1.4355 | 0.0 | 1.4355 | 1.1981 |
| No log | 0.5302 | 202 | 1.4400 | 0.0106 | 1.4400 | 1.2000 |
| No log | 0.5354 | 204 | 1.4913 | 0.0106 | 1.4913 | 1.2212 |
| No log | 0.5407 | 206 | 1.5593 | 0.0106 | 1.5593 | 1.2487 |
| No log | 0.5459 | 208 | 1.8235 | 0.0 | 1.8235 | 1.3504 |
| No log | 0.5512 | 210 | 1.9656 | 0.0 | 1.9656 | 1.4020 |
| No log | 0.5564 | 212 | 1.2205 | 0.1444 | 1.2205 | 1.1047 |
| No log | 0.5617 | 214 | 0.7094 | 0.1538 | 0.7094 | 0.8422 |
| No log | 0.5669 | 216 | 0.7089 | 0.4762 | 0.7089 | 0.8420 |
| No log | 0.5722 | 218 | 0.9571 | 0.1444 | 0.9571 | 0.9783 |
| No log | 0.5774 | 220 | 1.4906 | 0.0 | 1.4906 | 1.2209 |
| No log | 0.5827 | 222 | 1.6810 | 0.0 | 1.6810 | 1.2965 |
| No log | 0.5879 | 224 | 1.4617 | 0.0 | 1.4617 | 1.2090 |
| No log | 0.5932 | 226 | 0.9090 | 0.2924 | 0.9090 | 0.9534 |
| No log | 0.5984 | 228 | 0.7757 | 0.3210 | 0.7757 | 0.8807 |
| No log | 0.6037 | 230 | 0.8235 | 0.3210 | 0.8235 | 0.9075 |
| No log | 0.6089 | 232 | 1.2346 | 0.1444 | 1.2346 | 1.1111 |
| No log | 0.6142 | 234 | 1.7010 | 0.0 | 1.7010 | 1.3042 |
| No log | 0.6194 | 236 | 1.6649 | 0.0 | 1.6649 | 1.2903 |
| No log | 0.6247 | 238 | 1.1486 | 0.0106 | 1.1486 | 1.0717 |
| No log | 0.6299 | 240 | 0.7860 | 0.3419 | 0.7860 | 0.8866 |
| No log | 0.6352 | 242 | 0.8148 | 0.3419 | 0.8148 | 0.9027 |
| No log | 0.6404 | 244 | 1.1538 | 0.1444 | 1.1538 | 1.0742 |
| No log | 0.6457 | 246 | 1.0500 | 0.2924 | 1.0500 | 1.0247 |
| No log | 0.6509 | 248 | 1.2892 | 0.1444 | 1.2892 | 1.1354 |
| No log | 0.6562 | 250 | 1.4334 | 0.1444 | 1.4334 | 1.1972 |
| No log | 0.6614 | 252 | 1.3731 | 0.1444 | 1.3731 | 1.1718 |
| No log | 0.6667 | 254 | 1.2068 | 0.1444 | 1.2068 | 1.0986 |
| No log | 0.6719 | 256 | 0.9498 | 0.2924 | 0.9498 | 0.9746 |
| No log | 0.6772 | 258 | 1.1941 | 0.1444 | 1.1941 | 1.0927 |
| No log | 0.6824 | 260 | 1.7117 | 0.1000 | 1.7117 | 1.3083 |
| No log | 0.6877 | 262 | 1.9335 | 0.0873 | 1.9335 | 1.3905 |
| No log | 0.6929 | 264 | 1.3672 | 0.0106 | 1.3672 | 1.1693 |
| No log | 0.6982 | 266 | 0.8064 | 0.3210 | 0.8064 | 0.8980 |
| No log | 0.7034 | 268 | 0.7523 | 0.1270 | 0.7523 | 0.8674 |
| No log | 0.7087 | 270 | 0.9503 | 0.2924 | 0.9503 | 0.9748 |
| No log | 0.7139 | 272 | 1.2665 | 0.0106 | 1.2665 | 1.1254 |
| No log | 0.7192 | 274 | 1.1794 | 0.0106 | 1.1794 | 1.0860 |
| No log | 0.7244 | 276 | 0.9198 | 0.2924 | 0.9198 | 0.9591 |
| No log | 0.7297 | 278 | 0.8598 | 0.2924 | 0.8598 | 0.9272 |
| No log | 0.7349 | 280 | 1.0150 | 0.1444 | 1.0150 | 1.0075 |
| No log | 0.7402 | 282 | 1.4208 | 0.0 | 1.4208 | 1.1920 |
| No log | 0.7454 | 284 | 1.8097 | 0.0 | 1.8097 | 1.3453 |
| No log | 0.7507 | 286 | 1.7147 | 0.0 | 1.7147 | 1.3095 |
| No log | 0.7559 | 288 | 1.3247 | 0.0 | 1.3247 | 1.1510 |
| No log | 0.7612 | 290 | 0.9930 | 0.1444 | 0.9930 | 0.9965 |
| No log | 0.7664 | 292 | 1.0556 | 0.1444 | 1.0556 | 1.0274 |
| No log | 0.7717 | 294 | 1.1920 | 0.0106 | 1.1920 | 1.0918 |
| No log | 0.7769 | 296 | 1.1338 | 0.1444 | 1.1338 | 1.0648 |
| No log | 0.7822 | 298 | 1.1484 | 0.1444 | 1.1484 | 1.0716 |
| No log | 0.7874 | 300 | 1.4066 | 0.0106 | 1.4066 | 1.1860 |
| No log | 0.7927 | 302 | 1.5537 | 0.0106 | 1.5537 | 1.2465 |
| No log | 0.7979 | 304 | 1.5806 | 0.0106 | 1.5806 | 1.2572 |
| No log | 0.8031 | 306 | 1.2893 | 0.0106 | 1.2893 | 1.1355 |
| No log | 0.8084 | 308 | 1.2001 | 0.1444 | 1.2001 | 1.0955 |
| No log | 0.8136 | 310 | 0.9759 | 0.1444 | 0.9759 | 0.9879 |
| No log | 0.8189 | 312 | 1.0270 | 0.1444 | 1.0270 | 1.0134 |
| No log | 0.8241 | 314 | 1.4009 | 0.0 | 1.4009 | 1.1836 |
| No log | 0.8294 | 316 | 1.6518 | 0.0 | 1.6518 | 1.2852 |
| No log | 0.8346 | 318 | 1.8953 | 0.0 | 1.8953 | 1.3767 |
| No log | 0.8399 | 320 | 1.6007 | 0.0 | 1.6007 | 1.2652 |
| No log | 0.8451 | 322 | 1.0938 | 0.1444 | 1.0938 | 1.0459 |
| No log | 0.8504 | 324 | 1.0238 | 0.1444 | 1.0238 | 1.0118 |
| No log | 0.8556 | 326 | 1.3276 | 0.1444 | 1.3276 | 1.1522 |
| No log | 0.8609 | 328 | 1.3732 | 0.0106 | 1.3732 | 1.1718 |
| No log | 0.8661 | 330 | 1.3330 | 0.1444 | 1.3330 | 1.1545 |
| No log | 0.8714 | 332 | 1.0972 | 0.1444 | 1.0972 | 1.0475 |
| No log | 0.8766 | 334 | 1.2427 | 0.1444 | 1.2427 | 1.1148 |
| No log | 0.8819 | 336 | 1.5608 | 0.0 | 1.5608 | 1.2493 |
| No log | 0.8871 | 338 | 1.4975 | 0.0 | 1.4975 | 1.2237 |
| No log | 0.8924 | 340 | 1.2103 | 0.1444 | 1.2103 | 1.1002 |
| No log | 0.8976 | 342 | 0.9392 | 0.1444 | 0.9392 | 0.9691 |
| No log | 0.9029 | 344 | 0.9963 | 0.1444 | 0.9963 | 0.9982 |
| No log | 0.9081 | 346 | 1.3097 | 0.1444 | 1.3097 | 1.1444 |
| No log | 0.9134 | 348 | 1.4172 | 0.0 | 1.4172 | 1.1905 |
| No log | 0.9186 | 350 | 1.2153 | 0.1444 | 1.2153 | 1.1024 |
| No log | 0.9239 | 352 | 0.9300 | 0.1444 | 0.9300 | 0.9644 |
| No log | 0.9291 | 354 | 0.9042 | 0.1637 | 0.9042 | 0.9509 |
| No log | 0.9344 | 356 | 1.0288 | 0.1444 | 1.0288 | 1.0143 |
| No log | 0.9396 | 358 | 1.1006 | 0.1444 | 1.1006 | 1.0491 |
| No log | 0.9449 | 360 | 1.3180 | 0.1444 | 1.3180 | 1.1481 |
| No log | 0.9501 | 362 | 1.2985 | 0.1444 | 1.2985 | 1.1395 |
| No log | 0.9554 | 364 | 1.1472 | 0.1444 | 1.1472 | 1.0711 |
| No log | 0.9606 | 366 | 1.1917 | 0.1444 | 1.1917 | 1.0916 |
| No log | 0.9659 | 368 | 1.1457 | 0.1444 | 1.1457 | 1.0704 |
| No log | 0.9711 | 370 | 0.9949 | 0.1444 | 0.9949 | 0.9974 |
| No log | 0.9764 | 372 | 0.9734 | 0.1444 | 0.9734 | 0.9866 |
| No log | 0.9816 | 374 | 1.1985 | 0.1444 | 1.1985 | 1.0947 |
| No log | 0.9869 | 376 | 1.3699 | 0.0106 | 1.3699 | 1.1704 |
| No log | 0.9921 | 378 | 1.4171 | 0.0106 | 1.4171 | 1.1904 |
| No log | 0.9974 | 380 | 1.4769 | 0.0 | 1.4769 | 1.2153 |
| No log | 1.0026 | 382 | 1.2159 | 0.1444 | 1.2159 | 1.1027 |
| No log | 1.0079 | 384 | 0.9214 | 0.1444 | 0.9214 | 0.9599 |
| No log | 1.0131 | 386 | 0.9635 | 0.1444 | 0.9635 | 0.9816 |
| No log | 1.0184 | 388 | 1.0288 | 0.1444 | 1.0288 | 1.0143 |
| No log | 1.0236 | 390 | 1.0396 | 0.1444 | 1.0396 | 1.0196 |
| No log | 1.0289 | 392 | 1.1776 | 0.0106 | 1.1776 | 1.0852 |
| No log | 1.0341 | 394 | 1.3084 | 0.0106 | 1.3084 | 1.1438 |
| No log | 1.0394 | 396 | 1.1995 | 0.0106 | 1.1995 | 1.0952 |
| No log | 1.0446 | 398 | 1.0139 | 0.1444 | 1.0139 | 1.0069 |
| No log | 1.0499 | 400 | 1.0759 | 0.1444 | 1.0759 | 1.0373 |
| No log | 1.0551 | 402 | 1.1233 | 0.1444 | 1.1233 | 1.0598 |
| No log | 1.0604 | 404 | 1.0307 | 0.1444 | 1.0307 | 1.0152 |
| No log | 1.0656 | 406 | 0.9918 | 0.1444 | 0.9918 | 0.9959 |
| No log | 1.0709 | 408 | 0.9706 | 0.1444 | 0.9706 | 0.9852 |
| No log | 1.0761 | 410 | 0.8595 | 0.2092 | 0.8595 | 0.9271 |
| No log | 1.0814 | 412 | 0.8311 | 0.2092 | 0.8311 | 0.9116 |
| No log | 1.0866 | 414 | 0.9841 | 0.1444 | 0.9841 | 0.9920 |
| No log | 1.0919 | 416 | 1.5320 | 0.0106 | 1.5320 | 1.2377 |
| No log | 1.0971 | 418 | 1.8817 | 0.0 | 1.8817 | 1.3717 |
| No log | 1.1024 | 420 | 1.7101 | 0.0106 | 1.7101 | 1.3077 |
| No log | 1.1076 | 422 | 1.1085 | 0.1444 | 1.1085 | 1.0528 |
| No log | 1.1129 | 424 | 0.7554 | 0.2763 | 0.7554 | 0.8691 |
| No log | 1.1181 | 426 | 0.7295 | 0.3077 | 0.7295 | 0.8541 |
| No log | 1.1234 | 428 | 0.8579 | 0.3210 | 0.8579 | 0.9262 |
| No log | 1.1286 | 430 | 1.3197 | 0.1444 | 1.3197 | 1.1488 |
| No log | 1.1339 | 432 | 1.4374 | 0.0 | 1.4374 | 1.1989 |
| No log | 1.1391 | 434 | 1.2504 | 0.0106 | 1.2504 | 1.1182 |
| No log | 1.1444 | 436 | 0.9499 | 0.1444 | 0.9499 | 0.9746 |
| No log | 1.1496 | 438 | 0.8274 | 0.3529 | 0.8274 | 0.9096 |
| No log | 1.1549 | 440 | 0.8259 | 0.3529 | 0.8259 | 0.9088 |
| No log | 1.1601 | 442 | 0.9663 | 0.1444 | 0.9663 | 0.9830 |
| No log | 1.1654 | 444 | 1.2851 | 0.0106 | 1.2851 | 1.1336 |
| No log | 1.1706 | 446 | 1.4290 | 0.0106 | 1.4290 | 1.1954 |
| No log | 1.1759 | 448 | 1.2178 | 0.1444 | 1.2178 | 1.1035 |
| No log | 1.1811 | 450 | 0.9275 | 0.1444 | 0.9275 | 0.9630 |
| No log | 1.1864 | 452 | 1.0018 | 0.1444 | 1.0018 | 1.0009 |
| No log | 1.1916 | 454 | 1.3210 | 0.0106 | 1.3210 | 1.1493 |
| No log | 1.1969 | 456 | 1.4898 | 0.0106 | 1.4898 | 1.2206 |
| No log | 1.2021 | 458 | 1.6165 | 0.0 | 1.6165 | 1.2714 |
| No log | 1.2073 | 460 | 1.3235 | 0.1444 | 1.3235 | 1.1504 |
| No log | 1.2126 | 462 | 1.0257 | 0.1444 | 1.0257 | 1.0128 |
| No log | 1.2178 | 464 | 0.8501 | 0.2924 | 0.8501 | 0.9220 |
| No log | 1.2231 | 466 | 0.8815 | 0.2924 | 0.8815 | 0.9389 |
| No log | 1.2283 | 468 | 1.0237 | 0.1444 | 1.0237 | 1.0118 |
| No log | 1.2336 | 470 | 1.0212 | 0.1444 | 1.0212 | 1.0105 |
| No log | 1.2388 | 472 | 1.0273 | 0.1444 | 1.0273 | 1.0135 |
| No log | 1.2441 | 474 | 0.8989 | 0.2924 | 0.8989 | 0.9481 |
| No log | 1.2493 | 476 | 0.9502 | 0.1444 | 0.9502 | 0.9748 |
| No log | 1.2546 | 478 | 1.2638 | 0.1444 | 1.2638 | 1.1242 |
| No log | 1.2598 | 480 | 1.3562 | 0.0106 | 1.3562 | 1.1646 |
| No log | 1.2651 | 482 | 1.1932 | 0.1444 | 1.1932 | 1.0924 |
| No log | 1.2703 | 484 | 0.9404 | 0.1444 | 0.9404 | 0.9698 |
| No log | 1.2756 | 486 | 0.8977 | 0.2924 | 0.8977 | 0.9475 |
| No log | 1.2808 | 488 | 0.9714 | 0.1444 | 0.9714 | 0.9856 |
| No log | 1.2861 | 490 | 0.9746 | 0.1444 | 0.9746 | 0.9872 |
| No log | 1.2913 | 492 | 1.1098 | 0.1444 | 1.1098 | 1.0535 |
| No log | 1.2966 | 494 | 1.2008 | 0.1444 | 1.2008 | 1.0958 |
| No log | 1.3018 | 496 | 1.0760 | 0.1444 | 1.0760 | 1.0373 |
| No log | 1.3071 | 498 | 0.8973 | 0.2924 | 0.8973 | 0.9473 |
| 0.3889 | 1.3123 | 500 | 0.8979 | 0.2924 | 0.8979 | 0.9476 |
| 0.3889 | 1.3176 | 502 | 1.0951 | 0.1444 | 1.0951 | 1.0465 |
| 0.3889 | 1.3228 | 504 | 1.2992 | 0.1444 | 1.2992 | 1.1398 |
| 0.3889 | 1.3281 | 506 | 1.3158 | 0.1444 | 1.3158 | 1.1471 |
| 0.3889 | 1.3333 | 508 | 1.2937 | 0.1444 | 1.2937 | 1.1374 |
| 0.3889 | 1.3386 | 510 | 1.1436 | 0.1444 | 1.1436 | 1.0694 |
| 0.3889 | 1.3438 | 512 | 0.9966 | 0.1444 | 0.9966 | 0.9983 |
| 0.3889 | 1.3491 | 514 | 1.0585 | 0.1444 | 1.0585 | 1.0288 |
| 0.3889 | 1.3543 | 516 | 1.0489 | 0.1444 | 1.0489 | 1.0242 |
| 0.3889 | 1.3596 | 518 | 0.9591 | 0.1444 | 0.9591 | 0.9793 |
| 0.3889 | 1.3648 | 520 | 0.8419 | 0.3529 | 0.8419 | 0.9175 |
| 0.3889 | 1.3701 | 522 | 0.9081 | 0.1444 | 0.9081 | 0.9529 |
| 0.3889 | 1.3753 | 524 | 1.1393 | 0.1444 | 1.1393 | 1.0674 |
| 0.3889 | 1.3806 | 526 | 1.2929 | 0.1444 | 1.2929 | 1.1370 |
| 0.3889 | 1.3858 | 528 | 1.1077 | 0.1444 | 1.1077 | 1.0525 |
| 0.3889 | 1.3911 | 530 | 0.9450 | 0.1451 | 0.9450 | 0.9721 |
| 0.3889 | 1.3963 | 532 | 0.9360 | 0.1451 | 0.9360 | 0.9675 |
| 0.3889 | 1.4016 | 534 | 1.1582 | 0.1444 | 1.1582 | 1.0762 |
| 0.3889 | 1.4068 | 536 | 1.2938 | 0.1444 | 1.2938 | 1.1374 |
| 0.3889 | 1.4121 | 538 | 1.2143 | 0.1444 | 1.2143 | 1.1020 |
| 0.3889 | 1.4173 | 540 | 1.0015 | 0.1444 | 1.0015 | 1.0008 |
| 0.3889 | 1.4226 | 542 | 0.8021 | 0.3373 | 0.8021 | 0.8956 |
| 0.3889 | 1.4278 | 544 | 0.7728 | 0.5096 | 0.7728 | 0.8791 |
| 0.3889 | 1.4331 | 546 | 0.8942 | 0.1444 | 0.8942 | 0.9456 |
| 0.3889 | 1.4383 | 548 | 1.1181 | 0.1444 | 1.1181 | 1.0574 |
| 0.3889 | 1.4436 | 550 | 1.1124 | 0.1444 | 1.1124 | 1.0547 |
| 0.3889 | 1.4488 | 552 | 0.9010 | 0.1444 | 0.9010 | 0.9492 |
| 0.3889 | 1.4541 | 554 | 0.7398 | 0.5417 | 0.7398 | 0.8601 |
| 0.3889 | 1.4593 | 556 | 0.7783 | 0.3210 | 0.7783 | 0.8822 |
| 0.3889 | 1.4646 | 558 | 0.8784 | 0.2924 | 0.8784 | 0.9372 |
| 0.3889 | 1.4698 | 560 | 1.0635 | 0.1444 | 1.0635 | 1.0312 |
| 0.3889 | 1.4751 | 562 | 1.1227 | 0.1444 | 1.1227 | 1.0596 |
| 0.3889 | 1.4803 | 564 | 0.9576 | 0.1444 | 0.9576 | 0.9786 |
| 0.3889 | 1.4856 | 566 | 0.8681 | 0.2924 | 0.8681 | 0.9317 |
| 0.3889 | 1.4908 | 568 | 0.8302 | 0.3210 | 0.8302 | 0.9112 |
| 0.3889 | 1.4961 | 570 | 0.9554 | 0.1444 | 0.9554 | 0.9774 |
| 0.3889 | 1.5013 | 572 | 1.1533 | 0.1444 | 1.1533 | 1.0739 |
| 0.3889 | 1.5066 | 574 | 1.1506 | 0.1444 | 1.1506 | 1.0727 |
| 0.3889 | 1.5118 | 576 | 0.9412 | 0.1444 | 0.9412 | 0.9702 |
| 0.3889 | 1.5171 | 578 | 0.8597 | 0.3210 | 0.8597 | 0.9272 |
| 0.3889 | 1.5223 | 580 | 0.9259 | 0.1444 | 0.9259 | 0.9622 |
| 0.3889 | 1.5276 | 582 | 1.0626 | 0.1444 | 1.0626 | 1.0308 |
| 0.3889 | 1.5328 | 584 | 1.1112 | 0.1444 | 1.1112 | 1.0541 |
| 0.3889 | 1.5381 | 586 | 0.9573 | 0.1444 | 0.9573 | 0.9784 |
| 0.3889 | 1.5433 | 588 | 0.9422 | 0.1637 | 0.9422 | 0.9707 |
| 0.3889 | 1.5486 | 590 | 1.0093 | 0.1444 | 1.0093 | 1.0046 |
| 0.3889 | 1.5538 | 592 | 0.9733 | 0.1637 | 0.9733 | 0.9866 |
| 0.3889 | 1.5591 | 594 | 1.0891 | 0.1444 | 1.0891 | 1.0436 |
| 0.3889 | 1.5643 | 596 | 1.1793 | 0.1444 | 1.1793 | 1.0860 |
| 0.3889 | 1.5696 | 598 | 1.0361 | 0.1637 | 1.0361 | 1.0179 |
| 0.3889 | 1.5748 | 600 | 0.9821 | 0.1637 | 0.9821 | 0.9910 |
| 0.3889 | 1.5801 | 602 | 0.9681 | 0.1637 | 0.9681 | 0.9839 |
| 0.3889 | 1.5853 | 604 | 0.9969 | 0.1444 | 0.9969 | 0.9984 |
| 0.3889 | 1.5906 | 606 | 1.0051 | 0.1444 | 1.0051 | 1.0025 |
| 0.3889 | 1.5958 | 608 | 0.9398 | 0.1444 | 0.9398 | 0.9695 |
| 0.3889 | 1.6010 | 610 | 0.9300 | 0.1444 | 0.9300 | 0.9644 |
| 0.3889 | 1.6063 | 612 | 0.9529 | 0.1444 | 0.9529 | 0.9762 |
| 0.3889 | 1.6115 | 614 | 1.1260 | 0.1444 | 1.1260 | 1.0611 |
| 0.3889 | 1.6168 | 616 | 1.4263 | 0.0106 | 1.4263 | 1.1943 |
| 0.3889 | 1.6220 | 618 | 1.4007 | 0.1444 | 1.4007 | 1.1835 |
| 0.3889 | 1.6273 | 620 | 1.1734 | 0.1444 | 1.1734 | 1.0832 |
| 0.3889 | 1.6325 | 622 | 0.8234 | 0.3086 | 0.8234 | 0.9074 |
| 0.3889 | 1.6378 | 624 | 0.7303 | 0.4211 | 0.7303 | 0.8546 |
| 0.3889 | 1.6430 | 626 | 0.7481 | 0.2878 | 0.7481 | 0.8649 |
| 0.3889 | 1.6483 | 628 | 0.8600 | 0.3210 | 0.8600 | 0.9273 |
| 0.3889 | 1.6535 | 630 | 1.0495 | 0.1444 | 1.0495 | 1.0244 |
| 0.3889 | 1.6588 | 632 | 1.0508 | 0.1444 | 1.0508 | 1.0251 |
| 0.3889 | 1.6640 | 634 | 0.9574 | 0.1444 | 0.9574 | 0.9785 |
| 0.3889 | 1.6693 | 636 | 0.8377 | 0.3210 | 0.8377 | 0.9153 |
| 0.3889 | 1.6745 | 638 | 0.7107 | 0.5096 | 0.7107 | 0.8430 |
| 0.3889 | 1.6798 | 640 | 0.7768 | 0.3210 | 0.7768 | 0.8813 |
| 0.3889 | 1.6850 | 642 | 1.0020 | 0.1444 | 1.0020 | 1.0010 |
| 0.3889 | 1.6903 | 644 | 1.0488 | 0.1444 | 1.0488 | 1.0241 |
| 0.3889 | 1.6955 | 646 | 0.9881 | 0.1444 | 0.9881 | 0.9940 |
| 0.3889 | 1.7008 | 648 | 0.9907 | 0.1444 | 0.9907 | 0.9953 |
| 0.3889 | 1.7060 | 650 | 0.8734 | 0.1637 | 0.8734 | 0.9345 |
| 0.3889 | 1.7113 | 652 | 0.7851 | 0.3210 | 0.7851 | 0.8861 |
| 0.3889 | 1.7165 | 654 | 0.8964 | 0.1444 | 0.8964 | 0.9468 |
| 0.3889 | 1.7218 | 656 | 1.0746 | 0.1444 | 1.0746 | 1.0366 |
| 0.3889 | 1.7270 | 658 | 1.2271 | 0.1444 | 1.2271 | 1.1077 |
| 0.3889 | 1.7323 | 660 | 1.1701 | 0.1444 | 1.1701 | 1.0817 |
| 0.3889 | 1.7375 | 662 | 0.9293 | 0.1444 | 0.9293 | 0.9640 |
| 0.3889 | 1.7428 | 664 | 0.8319 | 0.1637 | 0.8319 | 0.9121 |
| 0.3889 | 1.7480 | 666 | 0.8159 | 0.1637 | 0.8159 | 0.9033 |
| 0.3889 | 1.7533 | 668 | 0.9447 | 0.1637 | 0.9447 | 0.9720 |
| 0.3889 | 1.7585 | 670 | 1.1645 | 0.1444 | 1.1645 | 1.0791 |
| 0.3889 | 1.7638 | 672 | 1.2537 | 0.1444 | 1.2537 | 1.1197 |
| 0.3889 | 1.7690 | 674 | 1.1733 | 0.1444 | 1.1733 | 1.0832 |
| 0.3889 | 1.7743 | 676 | 1.0815 | 0.1444 | 1.0815 | 1.0400 |
| 0.3889 | 1.7795 | 678 | 0.9240 | 0.2924 | 0.9240 | 0.9613 |
| 0.3889 | 1.7848 | 680 | 0.9271 | 0.3210 | 0.9271 | 0.9628 |
| 0.3889 | 1.7900 | 682 | 0.8811 | 0.3210 | 0.8811 | 0.9387 |
| 0.3889 | 1.7953 | 684 | 0.9125 | 0.3210 | 0.9125 | 0.9552 |
| 0.3889 | 1.8005 | 686 | 1.0291 | 0.1444 | 1.0291 | 1.0145 |
| 0.3889 | 1.8058 | 688 | 1.0551 | 0.1444 | 1.0551 | 1.0272 |
| 0.3889 | 1.8110 | 690 | 1.1908 | 0.1444 | 1.1908 | 1.0913 |
| 0.3889 | 1.8163 | 692 | 1.1869 | 0.1444 | 1.1869 | 1.0895 |
| 0.3889 | 1.8215 | 694 | 1.4308 | 0.0106 | 1.4308 | 1.1962 |
| 0.3889 | 1.8268 | 696 | 1.4412 | 0.0106 | 1.4412 | 1.2005 |
| 0.3889 | 1.8320 | 698 | 1.1571 | 0.1444 | 1.1571 | 1.0757 |
| 0.3889 | 1.8373 | 700 | 1.0250 | 0.1444 | 1.0250 | 1.0124 |
| 0.3889 | 1.8425 | 702 | 0.8913 | 0.3210 | 0.8913 | 0.9441 |
| 0.3889 | 1.8478 | 704 | 0.8880 | 0.3210 | 0.8880 | 0.9423 |
| 0.3889 | 1.8530 | 706 | 0.8046 | 0.4296 | 0.8046 | 0.8970 |
| 0.3889 | 1.8583 | 708 | 0.8025 | 0.4296 | 0.8025 | 0.8958 |
| 0.3889 | 1.8635 | 710 | 0.9379 | 0.2924 | 0.9379 | 0.9685 |
| 0.3889 | 1.8688 | 712 | 1.0131 | 0.1444 | 1.0131 | 1.0065 |
| 0.3889 | 1.8740 | 714 | 0.9086 | 0.2924 | 0.9086 | 0.9532 |
| 0.3889 | 1.8793 | 716 | 0.7839 | 0.2092 | 0.7839 | 0.8854 |
| 0.3889 | 1.8845 | 718 | 0.8552 | 0.3210 | 0.8552 | 0.9248 |
| 0.3889 | 1.8898 | 720 | 1.1461 | 0.1444 | 1.1461 | 1.0705 |
| 0.3889 | 1.8950 | 722 | 1.3771 | 0.1444 | 1.3771 | 1.1735 |
| 0.3889 | 1.9003 | 724 | 1.4631 | 0.0106 | 1.4631 | 1.2096 |
| 0.3889 | 1.9055 | 726 | 1.2432 | 0.0106 | 1.2432 | 1.1150 |
| 0.3889 | 1.9108 | 728 | 0.9728 | 0.2924 | 0.9728 | 0.9863 |
| 0.3889 | 1.9160 | 730 | 0.7411 | 0.1270 | 0.7411 | 0.8609 |
| 0.3889 | 1.9213 | 732 | 0.6861 | 0.0 | 0.6861 | 0.8283 |
| 0.3889 | 1.9265 | 734 | 0.6925 | 0.0 | 0.6925 | 0.8322 |
| 0.3889 | 1.9318 | 736 | 0.7731 | 0.2361 | 0.7731 | 0.8793 |
| 0.3889 | 1.9370 | 738 | 0.9625 | 0.2924 | 0.9625 | 0.9811 |
| 0.3889 | 1.9423 | 740 | 1.2184 | 0.1444 | 1.2184 | 1.1038 |
| 0.3889 | 1.9475 | 742 | 1.2194 | 0.1444 | 1.2194 | 1.1043 |
| 0.3889 | 1.9528 | 744 | 1.0165 | 0.2924 | 1.0165 | 1.0082 |
| 0.3889 | 1.9580 | 746 | 0.7954 | 0.3210 | 0.7954 | 0.8919 |
| 0.3889 | 1.9633 | 748 | 0.7552 | 0.3373 | 0.7552 | 0.8690 |
| 0.3889 | 1.9685 | 750 | 0.8507 | 0.3210 | 0.8507 | 0.9223 |
| 0.3889 | 1.9738 | 752 | 1.1020 | 0.1444 | 1.1020 | 1.0498 |
| 0.3889 | 1.9790 | 754 | 1.2714 | 0.1444 | 1.2714 | 1.1275 |
| 0.3889 | 1.9843 | 756 | 1.2609 | 0.1444 | 1.2609 | 1.1229 |
| 0.3889 | 1.9895 | 758 | 1.1044 | 0.1444 | 1.1044 | 1.0509 |
| 0.3889 | 1.9948 | 760 | 0.8672 | 0.3210 | 0.8672 | 0.9312 |
| 0.3889 | 2.0 | 762 | 0.7351 | 0.2361 | 0.7351 | 0.8574 |
| 0.3889 | 2.0052 | 764 | 0.7006 | 0.4460 | 0.7006 | 0.8370 |
| 0.3889 | 2.0105 | 766 | 0.7214 | 0.4296 | 0.7214 | 0.8493 |
| 0.3889 | 2.0157 | 768 | 0.8199 | 0.3529 | 0.8199 | 0.9055 |
| 0.3889 | 2.0210 | 770 | 0.9252 | 0.3210 | 0.9252 | 0.9619 |
| 0.3889 | 2.0262 | 772 | 0.8760 | 0.3210 | 0.8760 | 0.9360 |
| 0.3889 | 2.0315 | 774 | 0.8073 | 0.2361 | 0.8073 | 0.8985 |
| 0.3889 | 2.0367 | 776 | 0.7210 | 0.1538 | 0.7210 | 0.8491 |
| 0.3889 | 2.0420 | 778 | 0.6944 | 0.1791 | 0.6944 | 0.8333 |
| 0.3889 | 2.0472 | 780 | 0.7051 | 0.3077 | 0.7051 | 0.8397 |
| 0.3889 | 2.0525 | 782 | 0.7485 | 0.1295 | 0.7485 | 0.8652 |
| 0.3889 | 2.0577 | 784 | 0.9524 | 0.2924 | 0.9524 | 0.9759 |
| 0.3889 | 2.0630 | 786 | 1.1063 | 0.1444 | 1.1063 | 1.0518 |
| 0.3889 | 2.0682 | 788 | 1.0631 | 0.1444 | 1.0631 | 1.0311 |
| 0.3889 | 2.0735 | 790 | 0.9235 | 0.2924 | 0.9235 | 0.9610 |
| 0.3889 | 2.0787 | 792 | 0.8927 | 0.3086 | 0.8927 | 0.9448 |
| 0.3889 | 2.0840 | 794 | 0.8392 | 0.2293 | 0.8392 | 0.9161 |
| 0.3889 | 2.0892 | 796 | 0.9106 | 0.2924 | 0.9106 | 0.9542 |
| 0.3889 | 2.0945 | 798 | 0.9592 | 0.2924 | 0.9592 | 0.9794 |
| 0.3889 | 2.0997 | 800 | 0.9618 | 0.2924 | 0.9618 | 0.9807 |
| 0.3889 | 2.1050 | 802 | 0.8451 | 0.2361 | 0.8451 | 0.9193 |
| 0.3889 | 2.1102 | 804 | 0.7548 | 0.1538 | 0.7548 | 0.8688 |
| 0.3889 | 2.1155 | 806 | 0.7482 | 0.1538 | 0.7482 | 0.8650 |
| 0.3889 | 2.1207 | 808 | 0.8002 | 0.2293 | 0.8002 | 0.8946 |
| 0.3889 | 2.1260 | 810 | 1.0174 | 0.1444 | 1.0174 | 1.0086 |
| 0.3889 | 2.1312 | 812 | 1.1848 | 0.1444 | 1.1848 | 1.0885 |
| 0.3889 | 2.1365 | 814 | 1.1368 | 0.1444 | 1.1368 | 1.0662 |
| 0.3889 | 2.1417 | 816 | 0.9530 | 0.1444 | 0.9530 | 0.9762 |
| 0.3889 | 2.1470 | 818 | 0.8122 | 0.2361 | 0.8122 | 0.9012 |
| 0.3889 | 2.1522 | 820 | 0.8028 | 0.2361 | 0.8028 | 0.8960 |
| 0.3889 | 2.1575 | 822 | 0.8130 | 0.2361 | 0.8130 | 0.9016 |
| 0.3889 | 2.1627 | 824 | 0.8269 | 0.2361 | 0.8269 | 0.9093 |
| 0.3889 | 2.1680 | 826 | 0.8581 | 0.2293 | 0.8581 | 0.9263 |
| 0.3889 | 2.1732 | 828 | 0.8973 | 0.3086 | 0.8973 | 0.9472 |
| 0.3889 | 2.1785 | 830 | 1.0175 | 0.1444 | 1.0175 | 1.0087 |
| 0.3889 | 2.1837 | 832 | 1.0534 | 0.1444 | 1.0534 | 1.0264 |
| 0.3889 | 2.1890 | 834 | 0.9244 | 0.2924 | 0.9244 | 0.9615 |
| 0.3889 | 2.1942 | 836 | 0.8071 | 0.2293 | 0.8071 | 0.8984 |
| 0.3889 | 2.1995 | 838 | 0.7532 | 0.3529 | 0.7532 | 0.8679 |
| 0.3889 | 2.2047 | 840 | 0.7584 | 0.2293 | 0.7584 | 0.8708 |
| 0.3889 | 2.2100 | 842 | 0.8044 | 0.3529 | 0.8044 | 0.8969 |
| 0.3889 | 2.2152 | 844 | 0.8537 | 0.3529 | 0.8537 | 0.9240 |
| 0.3889 | 2.2205 | 846 | 0.9336 | 0.1444 | 0.9336 | 0.9662 |
| 0.3889 | 2.2257 | 848 | 0.9108 | 0.1444 | 0.9108 | 0.9544 |
| 0.3889 | 2.2310 | 850 | 0.8116 | 0.3529 | 0.8117 | 0.9009 |
| 0.3889 | 2.2362 | 852 | 0.7186 | 0.2293 | 0.7186 | 0.8477 |
| 0.3889 | 2.2415 | 854 | 0.7250 | 0.2293 | 0.7250 | 0.8514 |
| 0.3889 | 2.2467 | 856 | 0.8222 | 0.3529 | 0.8222 | 0.9068 |
| 0.3889 | 2.2520 | 858 | 1.1052 | 0.1444 | 1.1052 | 1.0513 |
| 0.3889 | 2.2572 | 860 | 1.2621 | 0.1444 | 1.2621 | 1.1234 |
| 0.3889 | 2.2625 | 862 | 1.1672 | 0.1444 | 1.1672 | 1.0804 |
| 0.3889 | 2.2677 | 864 | 0.9148 | 0.2924 | 0.9148 | 0.9564 |
| 0.3889 | 2.2730 | 866 | 0.7918 | 0.2361 | 0.7918 | 0.8898 |
| 0.3889 | 2.2782 | 868 | 0.7312 | 0.2293 | 0.7312 | 0.8551 |
| 0.3889 | 2.2835 | 870 | 0.7741 | 0.3529 | 0.7741 | 0.8798 |
| 0.3889 | 2.2887 | 872 | 0.8891 | 0.2924 | 0.8891 | 0.9429 |
| 0.3889 | 2.2940 | 874 | 1.0303 | 0.1444 | 1.0303 | 1.0151 |
| 0.3889 | 2.2992 | 876 | 1.0050 | 0.1444 | 1.0050 | 1.0025 |
| 0.3889 | 2.3045 | 878 | 0.8744 | 0.3210 | 0.8744 | 0.9351 |
| 0.3889 | 2.3097 | 880 | 0.7860 | 0.3529 | 0.7860 | 0.8865 |
| 0.3889 | 2.3150 | 882 | 0.7332 | 0.3373 | 0.7332 | 0.8563 |
| 0.3889 | 2.3202 | 884 | 0.7138 | 0.5217 | 0.7138 | 0.8449 |
| 0.3889 | 2.3255 | 886 | 0.7238 | 0.2293 | 0.7238 | 0.8508 |
| 0.3889 | 2.3307 | 888 | 0.8203 | 0.3373 | 0.8203 | 0.9057 |
| 0.3889 | 2.3360 | 890 | 0.9743 | 0.1444 | 0.9743 | 0.9871 |
| 0.3889 | 2.3412 | 892 | 1.0418 | 0.1444 | 1.0418 | 1.0207 |
| 0.3889 | 2.3465 | 894 | 0.9269 | 0.1637 | 0.9269 | 0.9627 |
| 0.3889 | 2.3517 | 896 | 0.7862 | 0.3529 | 0.7862 | 0.8867 |
| 0.3889 | 2.3570 | 898 | 0.7438 | 0.2293 | 0.7438 | 0.8624 |
| 0.3889 | 2.3622 | 900 | 0.7907 | 0.3529 | 0.7907 | 0.8892 |
| 0.3889 | 2.3675 | 902 | 0.8283 | 0.3210 | 0.8283 | 0.9101 |
| 0.3889 | 2.3727 | 904 | 0.8704 | 0.3210 | 0.8704 | 0.9329 |
| 0.3889 | 2.3780 | 906 | 0.7928 | 0.3529 | 0.7928 | 0.8904 |
| 0.3889 | 2.3832 | 908 | 0.7618 | 0.2361 | 0.7618 | 0.8728 |
| 0.3889 | 2.3885 | 910 | 0.7648 | 0.2361 | 0.7648 | 0.8745 |
| 0.3889 | 2.3937 | 912 | 0.7963 | 0.3529 | 0.7963 | 0.8924 |
| 0.3889 | 2.3990 | 914 | 0.8714 | 0.3210 | 0.8714 | 0.9335 |
| 0.3889 | 2.4042 | 916 | 0.7926 | 0.3529 | 0.7926 | 0.8903 |
| 0.3889 | 2.4094 | 918 | 0.7796 | 0.3529 | 0.7796 | 0.8829 |
| 0.3889 | 2.4147 | 920 | 0.7395 | 0.2361 | 0.7395 | 0.8600 |
| 0.3889 | 2.4199 | 922 | 0.7660 | 0.2361 | 0.7660 | 0.8752 |
| 0.3889 | 2.4252 | 924 | 0.7979 | 0.3529 | 0.7979 | 0.8933 |
| 0.3889 | 2.4304 | 926 | 0.7401 | 0.2361 | 0.7401 | 0.8603 |
| 0.3889 | 2.4357 | 928 | 0.7184 | 0.2361 | 0.7184 | 0.8476 |
| 0.3889 | 2.4409 | 930 | 0.7262 | 0.2361 | 0.7262 | 0.8522 |
| 0.3889 | 2.4462 | 932 | 0.7646 | 0.2361 | 0.7646 | 0.8744 |
| 0.3889 | 2.4514 | 934 | 0.8370 | 0.3210 | 0.8370 | 0.9149 |
| 0.3889 | 2.4567 | 936 | 0.8038 | 0.2092 | 0.8038 | 0.8965 |
| 0.3889 | 2.4619 | 938 | 0.7863 | 0.2092 | 0.7863 | 0.8867 |
| 0.3889 | 2.4672 | 940 | 0.7969 | 0.2048 | 0.7969 | 0.8927 |
| 0.3889 | 2.4724 | 942 | 1.0116 | 0.1444 | 1.0116 | 1.0058 |
| 0.3889 | 2.4777 | 944 | 1.2470 | 0.1444 | 1.2470 | 1.1167 |
| 0.3889 | 2.4829 | 946 | 1.1478 | 0.1444 | 1.1478 | 1.0713 |
| 0.3889 | 2.4882 | 948 | 0.9689 | 0.1444 | 0.9689 | 0.9843 |
| 0.3889 | 2.4934 | 950 | 0.8443 | 0.2092 | 0.8443 | 0.9188 |
| 0.3889 | 2.4987 | 952 | 0.8995 | 0.3210 | 0.8995 | 0.9484 |
| 0.3889 | 2.5039 | 954 | 0.9837 | 0.1444 | 0.9837 | 0.9918 |
| 0.3889 | 2.5092 | 956 | 0.9129 | 0.2924 | 0.9129 | 0.9555 |
| 0.3889 | 2.5144 | 958 | 0.8466 | 0.2092 | 0.8466 | 0.9201 |
| 0.3889 | 2.5197 | 960 | 0.8629 | 0.2092 | 0.8629 | 0.9289 |
| 0.3889 | 2.5249 | 962 | 0.9626 | 0.1444 | 0.9626 | 0.9811 |
| 0.3889 | 2.5302 | 964 | 0.9761 | 0.1444 | 0.9761 | 0.9880 |
| 0.3889 | 2.5354 | 966 | 0.9024 | 0.3210 | 0.9024 | 0.9500 |
| 0.3889 | 2.5407 | 968 | 0.9433 | 0.1444 | 0.9433 | 0.9712 |
| 0.3889 | 2.5459 | 970 | 0.9129 | 0.3210 | 0.9129 | 0.9555 |
| 0.3889 | 2.5512 | 972 | 0.8383 | 0.2092 | 0.8383 | 0.9156 |
| 0.3889 | 2.5564 | 974 | 0.7286 | 0.2568 | 0.7286 | 0.8536 |
| 0.3889 | 2.5617 | 976 | 0.7256 | 0.2568 | 0.7256 | 0.8518 |
| 0.3889 | 2.5669 | 978 | 0.8553 | 0.3086 | 0.8553 | 0.9248 |
| 0.3889 | 2.5722 | 980 | 1.2594 | 0.1444 | 1.2594 | 1.1222 |
| 0.3889 | 2.5774 | 982 | 1.4789 | 0.0106 | 1.4789 | 1.2161 |
| 0.3889 | 2.5827 | 984 | 1.4294 | 0.0106 | 1.4294 | 1.1956 |
| 0.3889 | 2.5879 | 986 | 1.1745 | 0.1444 | 1.1745 | 1.0838 |
| 0.3889 | 2.5932 | 988 | 0.8377 | 0.3210 | 0.8377 | 0.9153 |
| 0.3889 | 2.5984 | 990 | 0.7497 | 0.2361 | 0.7497 | 0.8658 |
| 0.3889 | 2.6037 | 992 | 0.7706 | 0.2361 | 0.7706 | 0.8778 |
| 0.3889 | 2.6089 | 994 | 0.8922 | 0.3210 | 0.8922 | 0.9446 |
| 0.3889 | 2.6142 | 996 | 1.1003 | 0.1444 | 1.1003 | 1.0489 |
| 0.3889 | 2.6194 | 998 | 1.1434 | 0.1444 | 1.1434 | 1.0693 |
| 0.1313 | 2.6247 | 1000 | 1.0184 | 0.1637 | 1.0184 | 1.0092 |
| 0.1313 | 2.6299 | 1002 | 0.9176 | 0.3210 | 0.9176 | 0.9579 |
| 0.1313 | 2.6352 | 1004 | 0.9320 | 0.3210 | 0.9320 | 0.9654 |
| 0.1313 | 2.6404 | 1006 | 0.9873 | 0.3210 | 0.9873 | 0.9936 |
| 0.1313 | 2.6457 | 1008 | 1.0830 | 0.1444 | 1.0830 | 1.0407 |
| 0.1313 | 2.6509 | 1010 | 1.0833 | 0.1444 | 1.0833 | 1.0408 |
| 0.1313 | 2.6562 | 1012 | 0.9617 | 0.2924 | 0.9617 | 0.9807 |
| 0.1313 | 2.6614 | 1014 | 0.8060 | 0.2361 | 0.8060 | 0.8978 |
| 0.1313 | 2.6667 | 1016 | 0.7493 | 0.1538 | 0.7493 | 0.8656 |
| 0.1313 | 2.6719 | 1018 | 0.7350 | 0.1538 | 0.7350 | 0.8573 |
| 0.1313 | 2.6772 | 1020 | 0.7466 | -0.0476 | 0.7466 | 0.8641 |
| 0.1313 | 2.6824 | 1022 | 0.8196 | 0.3529 | 0.8196 | 0.9053 |
| 0.1313 | 2.6877 | 1024 | 1.0006 | 0.2924 | 1.0006 | 1.0003 |
| 0.1313 | 2.6929 | 1026 | 1.1077 | 0.1444 | 1.1077 | 1.0525 |
| 0.1313 | 2.6982 | 1028 | 1.0172 | 0.2924 | 1.0172 | 1.0086 |
| 0.1313 | 2.7034 | 1030 | 0.8443 | 0.3210 | 0.8443 | 0.9189 |
| 0.1313 | 2.7087 | 1032 | 0.7847 | 0.3529 | 0.7847 | 0.8859 |
| 0.1313 | 2.7139 | 1034 | 0.8286 | 0.3529 | 0.8286 | 0.9103 |
| 0.1313 | 2.7192 | 1036 | 0.8382 | 0.3210 | 0.8382 | 0.9155 |
| 0.1313 | 2.7244 | 1038 | 0.8303 | 0.3210 | 0.8303 | 0.9112 |
| 0.1313 | 2.7297 | 1040 | 0.8712 | 0.3210 | 0.8712 | 0.9334 |
| 0.1313 | 2.7349 | 1042 | 0.8490 | 0.3210 | 0.8490 | 0.9214 |
| 0.1313 | 2.7402 | 1044 | 0.7507 | 0.3529 | 0.7507 | 0.8664 |
| 0.1313 | 2.7454 | 1046 | 0.6927 | 0.2763 | 0.6927 | 0.8323 |
| 0.1313 | 2.7507 | 1048 | 0.7025 | 0.1295 | 0.7025 | 0.8381 |
| 0.1313 | 2.7559 | 1050 | 0.7807 | 0.3529 | 0.7807 | 0.8836 |
| 0.1313 | 2.7612 | 1052 | 1.0446 | 0.1444 | 1.0446 | 1.0221 |
| 0.1313 | 2.7664 | 1054 | 1.2503 | 0.1444 | 1.2503 | 1.1182 |
| 0.1313 | 2.7717 | 1056 | 1.2285 | 0.1444 | 1.2285 | 1.1084 |
| 0.1313 | 2.7769 | 1058 | 1.0470 | 0.1444 | 1.0470 | 1.0232 |
| 0.1313 | 2.7822 | 1060 | 0.8634 | 0.3210 | 0.8634 | 0.9292 |
| 0.1313 | 2.7874 | 1062 | 0.7487 | 0.2361 | 0.7487 | 0.8653 |
| 0.1313 | 2.7927 | 1064 | 0.7170 | -0.0185 | 0.7170 | 0.8467 |
| 0.1313 | 2.7979 | 1066 | 0.7183 | -0.0185 | 0.7183 | 0.8475 |
| 0.1313 | 2.8031 | 1068 | 0.7743 | 0.2361 | 0.7743 | 0.8799 |
| 0.1313 | 2.8084 | 1070 | 0.8529 | 0.3210 | 0.8529 | 0.9235 |
| 0.1313 | 2.8136 | 1072 | 0.8818 | 0.3210 | 0.8818 | 0.9390 |
| 0.1313 | 2.8189 | 1074 | 0.9494 | 0.1637 | 0.9494 | 0.9744 |
| 0.1313 | 2.8241 | 1076 | 0.9085 | 0.3210 | 0.9085 | 0.9531 |
| 0.1313 | 2.8294 | 1078 | 0.7996 | 0.3210 | 0.7996 | 0.8942 |
| 0.1313 | 2.8346 | 1080 | 0.7753 | 0.3529 | 0.7753 | 0.8805 |
| 0.1313 | 2.8399 | 1082 | 0.7567 | 0.2361 | 0.7567 | 0.8699 |
| 0.1313 | 2.8451 | 1084 | 0.8117 | 0.3210 | 0.8117 | 0.9010 |
| 0.1313 | 2.8504 | 1086 | 0.7710 | 0.3210 | 0.7710 | 0.8780 |
| 0.1313 | 2.8556 | 1088 | 0.7247 | 0.2361 | 0.7247 | 0.8513 |
| 0.1313 | 2.8609 | 1090 | 0.7210 | 0.2361 | 0.7210 | 0.8491 |
| 0.1313 | 2.8661 | 1092 | 0.7113 | 0.2667 | 0.7113 | 0.8434 |
| 0.1313 | 2.8714 | 1094 | 0.7734 | 0.3210 | 0.7734 | 0.8794 |
| 0.1313 | 2.8766 | 1096 | 0.8319 | 0.3210 | 0.8319 | 0.9121 |
| 0.1313 | 2.8819 | 1098 | 0.8152 | 0.3210 | 0.8152 | 0.9029 |
| 0.1313 | 2.8871 | 1100 | 0.7180 | 0.1270 | 0.7180 | 0.8474 |
| 0.1313 | 2.8924 | 1102 | 0.6746 | 0.0 | 0.6746 | 0.8213 |
| 0.1313 | 2.8976 | 1104 | 0.6747 | 0.0 | 0.6747 | 0.8214 |
| 0.1313 | 2.9029 | 1106 | 0.7232 | 0.1037 | 0.7232 | 0.8504 |
| 0.1313 | 2.9081 | 1108 | 0.8437 | 0.3210 | 0.8437 | 0.9185 |
| 0.1313 | 2.9134 | 1110 | 0.8978 | 0.1637 | 0.8978 | 0.9475 |
| 0.1313 | 2.9186 | 1112 | 0.8424 | 0.3210 | 0.8424 | 0.9178 |
| 0.1313 | 2.9239 | 1114 | 0.7156 | 0.4699 | 0.7156 | 0.8459 |
| 0.1313 | 2.9291 | 1116 | 0.6848 | 0.5075 | 0.6848 | 0.8275 |
| 0.1313 | 2.9344 | 1118 | 0.6804 | 0.5075 | 0.6804 | 0.8249 |
| 0.1313 | 2.9396 | 1120 | 0.7434 | 0.3529 | 0.7434 | 0.8622 |
| 0.1313 | 2.9449 | 1122 | 0.8946 | 0.1637 | 0.8946 | 0.9458 |
| 0.1313 | 2.9501 | 1124 | 0.8727 | 0.1637 | 0.8727 | 0.9342 |
| 0.1313 | 2.9554 | 1126 | 0.7659 | 0.3529 | 0.7659 | 0.8751 |
| 0.1313 | 2.9606 | 1128 | 0.6879 | 0.3889 | 0.6879 | 0.8294 |
| 0.1313 | 2.9659 | 1130 | 0.7001 | 0.1538 | 0.7001 | 0.8367 |
| 0.1313 | 2.9711 | 1132 | 0.7020 | 0.1538 | 0.7020 | 0.8379 |
| 0.1313 | 2.9764 | 1134 | 0.7129 | 0.1538 | 0.7129 | 0.8444 |
| 0.1313 | 2.9816 | 1136 | 0.7223 | 0.1295 | 0.7223 | 0.8499 |
| 0.1313 | 2.9869 | 1138 | 0.7775 | 0.3210 | 0.7775 | 0.8817 |
| 0.1313 | 2.9921 | 1140 | 0.8505 | 0.3210 | 0.8505 | 0.9222 |
| 0.1313 | 2.9974 | 1142 | 0.8354 | 0.3210 | 0.8354 | 0.9140 |
| 0.1313 | 3.0026 | 1144 | 0.8816 | 0.1637 | 0.8816 | 0.9389 |
| 0.1313 | 3.0079 | 1146 | 0.8145 | 0.2048 | 0.8145 | 0.9025 |
| 0.1313 | 3.0131 | 1148 | 0.8146 | 0.2048 | 0.8146 | 0.9026 |
| 0.1313 | 3.0184 | 1150 | 0.8190 | 0.2048 | 0.8190 | 0.9050 |
| 0.1313 | 3.0236 | 1152 | 0.8285 | 0.2048 | 0.8285 | 0.9102 |
| 0.1313 | 3.0289 | 1154 | 0.8222 | 0.2048 | 0.8222 | 0.9068 |
| 0.1313 | 3.0341 | 1156 | 0.8745 | 0.3210 | 0.8745 | 0.9352 |
| 0.1313 | 3.0394 | 1158 | 0.9385 | 0.1444 | 0.9385 | 0.9687 |
| 0.1313 | 3.0446 | 1160 | 0.9199 | 0.1444 | 0.9199 | 0.9591 |
| 0.1313 | 3.0499 | 1162 | 0.8314 | 0.2092 | 0.8314 | 0.9118 |
| 0.1313 | 3.0551 | 1164 | 0.8056 | 0.0833 | 0.8056 | 0.8976 |
| 0.1313 | 3.0604 | 1166 | 0.8486 | 0.2092 | 0.8486 | 0.9212 |
| 0.1313 | 3.0656 | 1168 | 0.9491 | 0.1444 | 0.9491 | 0.9742 |
| 0.1313 | 3.0709 | 1170 | 0.9783 | 0.1444 | 0.9783 | 0.9891 |
| 0.1313 | 3.0761 | 1172 | 0.8793 | 0.0494 | 0.8793 | 0.9377 |
| 0.1313 | 3.0814 | 1174 | 0.8173 | 0.2092 | 0.8173 | 0.9040 |
| 0.1313 | 3.0866 | 1176 | 0.7753 | 0.1295 | 0.7753 | 0.8805 |
| 0.1313 | 3.0919 | 1178 | 0.7589 | 0.5075 | 0.7589 | 0.8711 |
| 0.1313 | 3.0971 | 1180 | 0.7710 | 0.3231 | 0.7710 | 0.8781 |
| 0.1313 | 3.1024 | 1182 | 0.8615 | 0.0494 | 0.8615 | 0.9282 |
| 0.1313 | 3.1076 | 1184 | 0.8923 | 0.0494 | 0.8923 | 0.9446 |
| 0.1313 | 3.1129 | 1186 | 0.8195 | 0.2092 | 0.8195 | 0.9053 |
| 0.1313 | 3.1181 | 1188 | 0.7876 | 0.0833 | 0.7876 | 0.8874 |
| 0.1313 | 3.1234 | 1190 | 0.7468 | 0.3419 | 0.7468 | 0.8641 |
| 0.1313 | 3.1286 | 1192 | 0.7439 | 0.3419 | 0.7439 | 0.8625 |
| 0.1313 | 3.1339 | 1194 | 0.7840 | 0.0833 | 0.7840 | 0.8854 |
| 0.1313 | 3.1391 | 1196 | 0.8747 | 0.0494 | 0.8747 | 0.9353 |
| 0.1313 | 3.1444 | 1198 | 0.9294 | 0.1637 | 0.9294 | 0.9640 |
| 0.1313 | 3.1496 | 1200 | 0.8671 | 0.2092 | 0.8671 | 0.9312 |
| 0.1313 | 3.1549 | 1202 | 0.8465 | 0.2092 | 0.8465 | 0.9201 |
| 0.1313 | 3.1601 | 1204 | 0.8931 | 0.2092 | 0.8931 | 0.9451 |
| 0.1313 | 3.1654 | 1206 | 0.8434 | 0.2092 | 0.8434 | 0.9183 |
| 0.1313 | 3.1706 | 1208 | 0.7850 | 0.0892 | 0.7850 | 0.8860 |
| 0.1313 | 3.1759 | 1210 | 0.7938 | 0.0892 | 0.7938 | 0.8910 |
| 0.1313 | 3.1811 | 1212 | 0.8508 | 0.0833 | 0.8508 | 0.9224 |
| 0.1313 | 3.1864 | 1214 | 0.8567 | 0.0833 | 0.8567 | 0.9256 |
| 0.1313 | 3.1916 | 1216 | 0.8219 | 0.0892 | 0.8219 | 0.9066 |
| 0.1313 | 3.1969 | 1218 | 0.8423 | 0.0892 | 0.8423 | 0.9178 |
| 0.1313 | 3.2021 | 1220 | 0.9218 | 0.2048 | 0.9218 | 0.9601 |
| 0.1313 | 3.2073 | 1222 | 0.9122 | 0.2048 | 0.9122 | 0.9551 |
| 0.1313 | 3.2126 | 1224 | 0.9361 | 0.2048 | 0.9361 | 0.9675 |
| 0.1313 | 3.2178 | 1226 | 0.8950 | 0.2048 | 0.8950 | 0.9461 |
| 0.1313 | 3.2231 | 1228 | 0.8666 | 0.0892 | 0.8666 | 0.9309 |
| 0.1313 | 3.2283 | 1230 | 0.8257 | 0.0892 | 0.8257 | 0.9087 |
| 0.1313 | 3.2336 | 1232 | 0.8295 | 0.0892 | 0.8295 | 0.9107 |
| 0.1313 | 3.2388 | 1234 | 0.8338 | 0.0892 | 0.8338 | 0.9132 |
| 0.1313 | 3.2441 | 1236 | 0.8717 | 0.2092 | 0.8717 | 0.9337 |
| 0.1313 | 3.2493 | 1238 | 0.8773 | 0.2092 | 0.8773 | 0.9367 |
| 0.1313 | 3.2546 | 1240 | 0.9406 | 0.1637 | 0.9406 | 0.9698 |
| 0.1313 | 3.2598 | 1242 | 0.9364 | 0.1637 | 0.9364 | 0.9677 |
| 0.1313 | 3.2651 | 1244 | 0.8620 | 0.2048 | 0.8620 | 0.9284 |
| 0.1313 | 3.2703 | 1246 | 0.7980 | 0.0892 | 0.7980 | 0.8933 |
| 0.1313 | 3.2756 | 1248 | 0.8300 | 0.2048 | 0.8300 | 0.9110 |
| 0.1313 | 3.2808 | 1250 | 0.8144 | 0.2092 | 0.8144 | 0.9025 |
| 0.1313 | 3.2861 | 1252 | 0.8122 | 0.2092 | 0.8122 | 0.9012 |
| 0.1313 | 3.2913 | 1254 | 0.8008 | 0.2092 | 0.8008 | 0.8949 |
| 0.1313 | 3.2966 | 1256 | 0.8092 | 0.3210 | 0.8092 | 0.8995 |
| 0.1313 | 3.3018 | 1258 | 0.8135 | 0.3210 | 0.8135 | 0.9019 |
| 0.1313 | 3.3071 | 1260 | 0.7441 | -0.0342 | 0.7441 | 0.8626 |
| 0.1313 | 3.3123 | 1262 | 0.7169 | 0.0179 | 0.7169 | 0.8467 |
| 0.1313 | 3.3176 | 1264 | 0.7208 | 0.0179 | 0.7208 | 0.8490 |
| 0.1313 | 3.3228 | 1266 | 0.7558 | -0.0342 | 0.7558 | 0.8694 |
| 0.1313 | 3.3281 | 1268 | 0.8975 | 0.3210 | 0.8975 | 0.9474 |
| 0.1313 | 3.3333 | 1270 | 1.0004 | 0.1444 | 1.0004 | 1.0002 |
| 0.1313 | 3.3386 | 1272 | 0.9524 | 0.2924 | 0.9524 | 0.9759 |
| 0.1313 | 3.3438 | 1274 | 0.8949 | 0.3210 | 0.8949 | 0.9460 |
| 0.1313 | 3.3491 | 1276 | 0.8802 | 0.2092 | 0.8802 | 0.9382 |
| 0.1313 | 3.3543 | 1278 | 0.9025 | 0.3210 | 0.9025 | 0.9500 |
| 0.1313 | 3.3596 | 1280 | 0.9010 | 0.3210 | 0.9010 | 0.9492 |
| 0.1313 | 3.3648 | 1282 | 0.8703 | 0.3210 | 0.8703 | 0.9329 |
| 0.1313 | 3.3701 | 1284 | 0.8009 | 0.0833 | 0.8009 | 0.8949 |
| 0.1313 | 3.3753 | 1286 | 0.7777 | 0.1081 | 0.7777 | 0.8819 |
| 0.1313 | 3.3806 | 1288 | 0.8209 | 0.2092 | 0.8209 | 0.9060 |
| 0.1313 | 3.3858 | 1290 | 0.8870 | 0.3210 | 0.8870 | 0.9418 |
| 0.1313 | 3.3911 | 1292 | 0.8717 | 0.3210 | 0.8717 | 0.9336 |
| 0.1313 | 3.3963 | 1294 | 0.7832 | 0.0833 | 0.7832 | 0.8850 |
| 0.1313 | 3.4016 | 1296 | 0.7657 | 0.2568 | 0.7657 | 0.8750 |
| 0.1313 | 3.4068 | 1298 | 0.7904 | 0.0892 | 0.7904 | 0.8890 |
| 0.1313 | 3.4121 | 1300 | 0.8684 | 0.3210 | 0.8684 | 0.9319 |
| 0.1313 | 3.4173 | 1302 | 0.9244 | 0.1637 | 0.9244 | 0.9614 |
| 0.1313 | 3.4226 | 1304 | 0.8495 | 0.3210 | 0.8495 | 0.9217 |
| 0.1313 | 3.4278 | 1306 | 0.7822 | 0.0892 | 0.7822 | 0.8844 |
| 0.1313 | 3.4331 | 1308 | 0.7575 | 0.4615 | 0.7575 | 0.8704 |
| 0.1313 | 3.4383 | 1310 | 0.7622 | 0.4211 | 0.7622 | 0.8730 |
| 0.1313 | 3.4436 | 1312 | 0.8096 | 0.2048 | 0.8096 | 0.8998 |
| 0.1313 | 3.4488 | 1314 | 0.8917 | 0.3210 | 0.8917 | 0.9443 |
| 0.1313 | 3.4541 | 1316 | 0.9005 | 0.3210 | 0.9005 | 0.9489 |
| 0.1313 | 3.4593 | 1318 | 0.8582 | 0.3210 | 0.8582 | 0.9264 |
| 0.1313 | 3.4646 | 1320 | 0.7586 | 0.0833 | 0.7586 | 0.8710 |
| 0.1313 | 3.4698 | 1322 | 0.7302 | 0.3231 | 0.7302 | 0.8545 |
| 0.1313 | 3.4751 | 1324 | 0.7524 | 0.2667 | 0.7524 | 0.8674 |
| 0.1313 | 3.4803 | 1326 | 0.8157 | 0.3210 | 0.8157 | 0.9031 |
| 0.1313 | 3.4856 | 1328 | 0.7970 | 0.0833 | 0.7970 | 0.8927 |
| 0.1313 | 3.4908 | 1330 | 0.7411 | 0.2878 | 0.7411 | 0.8609 |
| 0.1313 | 3.4961 | 1332 | 0.7435 | 0.1081 | 0.7435 | 0.8622 |
| 0.1313 | 3.5013 | 1334 | 0.7960 | 0.0833 | 0.7960 | 0.8922 |
| 0.1313 | 3.5066 | 1336 | 0.7957 | 0.0833 | 0.7957 | 0.8920 |
| 0.1313 | 3.5118 | 1338 | 0.7541 | 0.1081 | 0.7541 | 0.8684 |
| 0.1313 | 3.5171 | 1340 | 0.7597 | 0.1081 | 0.7597 | 0.8716 |
| 0.1313 | 3.5223 | 1342 | 0.8164 | 0.0833 | 0.8164 | 0.9035 |
| 0.1313 | 3.5276 | 1344 | 0.8142 | 0.2092 | 0.8142 | 0.9023 |
| 0.1313 | 3.5328 | 1346 | 0.8123 | 0.2092 | 0.8123 | 0.9013 |
| 0.1313 | 3.5381 | 1348 | 0.8219 | 0.2092 | 0.8219 | 0.9066 |
| 0.1313 | 3.5433 | 1350 | 0.7637 | 0.1081 | 0.7637 | 0.8739 |
| 0.1313 | 3.5486 | 1352 | 0.7536 | 0.4615 | 0.7536 | 0.8681 |
| 0.1313 | 3.5538 | 1354 | 0.7711 | 0.1081 | 0.7711 | 0.8781 |
| 0.1313 | 3.5591 | 1356 | 0.8708 | 0.3210 | 0.8708 | 0.9331 |
| 0.1313 | 3.5643 | 1358 | 0.9693 | 0.3210 | 0.9693 | 0.9846 |
| 0.1313 | 3.5696 | 1360 | 0.9401 | 0.3210 | 0.9401 | 0.9696 |
| 0.1313 | 3.5748 | 1362 | 0.8285 | 0.3210 | 0.8285 | 0.9102 |
| 0.1313 | 3.5801 | 1364 | 0.7625 | 0.1037 | 0.7625 | 0.8732 |
| 0.1313 | 3.5853 | 1366 | 0.7172 | 0.0 | 0.7172 | 0.8469 |
| 0.1313 | 3.5906 | 1368 | 0.7153 | 0.0 | 0.7153 | 0.8458 |
| 0.1313 | 3.5958 | 1370 | 0.7431 | 0.1270 | 0.7431 | 0.8620 |
| 0.1313 | 3.6010 | 1372 | 0.8359 | 0.3210 | 0.8359 | 0.9143 |
| 0.1313 | 3.6063 | 1374 | 0.8879 | 0.3210 | 0.8879 | 0.9423 |
| 0.1313 | 3.6115 | 1376 | 0.8424 | 0.2092 | 0.8424 | 0.9178 |
| 0.1313 | 3.6168 | 1378 | 0.8183 | 0.0892 | 0.8183 | 0.9046 |
| 0.1313 | 3.6220 | 1380 | 0.8255 | 0.0892 | 0.8255 | 0.9086 |
| 0.1313 | 3.6273 | 1382 | 0.8335 | 0.0892 | 0.8335 | 0.9130 |
| 0.1313 | 3.6325 | 1384 | 0.8389 | 0.0892 | 0.8389 | 0.9159 |
| 0.1313 | 3.6378 | 1386 | 0.8084 | 0.1081 | 0.8084 | 0.8991 |
| 0.1313 | 3.6430 | 1388 | 0.8359 | 0.2048 | 0.8359 | 0.9143 |
| 0.1313 | 3.6483 | 1390 | 0.9175 | 0.3210 | 0.9175 | 0.9579 |
| 0.1313 | 3.6535 | 1392 | 0.9421 | 0.3210 | 0.9421 | 0.9706 |
| 0.1313 | 3.6588 | 1394 | 0.8712 | 0.2092 | 0.8712 | 0.9334 |
| 0.1313 | 3.6640 | 1396 | 0.8107 | 0.1081 | 0.8107 | 0.9004 |
| 0.1313 | 3.6693 | 1398 | 0.7971 | 0.0 | 0.7971 | 0.8928 |
| 0.1313 | 3.6745 | 1400 | 0.8095 | 0.0 | 0.8095 | 0.8997 |
| 0.1313 | 3.6798 | 1402 | 0.8677 | 0.2048 | 0.8677 | 0.9315 |
| 0.1313 | 3.6850 | 1404 | 0.9135 | 0.2048 | 0.9135 | 0.9558 |
| 0.1313 | 3.6903 | 1406 | 0.9368 | 0.2048 | 0.9368 | 0.9679 |
| 0.1313 | 3.6955 | 1408 | 0.9147 | 0.2048 | 0.9147 | 0.9564 |
| 0.1313 | 3.7008 | 1410 | 0.9070 | 0.2048 | 0.9070 | 0.9524 |
| 0.1313 | 3.7060 | 1412 | 0.8908 | 0.2048 | 0.8908 | 0.9438 |
| 0.1313 | 3.7113 | 1414 | 0.9415 | 0.3210 | 0.9415 | 0.9703 |
| 0.1313 | 3.7165 | 1416 | 1.0274 | 0.1444 | 1.0274 | 1.0136 |
| 0.1313 | 3.7218 | 1418 | 0.9826 | 0.1444 | 0.9826 | 0.9913 |
| 0.1313 | 3.7270 | 1420 | 0.8651 | 0.3210 | 0.8651 | 0.9301 |
| 0.1313 | 3.7323 | 1422 | 0.7702 | -0.0185 | 0.7702 | 0.8776 |
| 0.1313 | 3.7375 | 1424 | 0.7585 | 0.0179 | 0.7585 | 0.8709 |
| 0.1313 | 3.7428 | 1426 | 0.7660 | 0.0179 | 0.7660 | 0.8752 |
| 0.1313 | 3.7480 | 1428 | 0.7726 | 0.0 | 0.7726 | 0.8790 |
| 0.1313 | 3.7533 | 1430 | 0.8643 | 0.2092 | 0.8643 | 0.9297 |
| 0.1313 | 3.7585 | 1432 | 0.9647 | 0.1637 | 0.9647 | 0.9822 |
| 0.1313 | 3.7638 | 1434 | 0.9666 | 0.1637 | 0.9666 | 0.9832 |
| 0.1313 | 3.7690 | 1436 | 0.9043 | 0.3210 | 0.9043 | 0.9509 |
| 0.1313 | 3.7743 | 1438 | 0.8402 | 0.0892 | 0.8402 | 0.9166 |
| 0.1313 | 3.7795 | 1440 | 0.8481 | 0.2092 | 0.8481 | 0.9209 |
| 0.1313 | 3.7848 | 1442 | 0.9195 | 0.1637 | 0.9195 | 0.9589 |
| 0.1313 | 3.7900 | 1444 | 0.9185 | 0.1637 | 0.9185 | 0.9584 |
| 0.1313 | 3.7953 | 1446 | 0.8203 | 0.2361 | 0.8203 | 0.9057 |
| 0.1313 | 3.8005 | 1448 | 0.7960 | 0.3016 | 0.7960 | 0.8922 |
| 0.1313 | 3.8058 | 1450 | 0.7779 | -0.0185 | 0.7779 | 0.8820 |
| 0.1313 | 3.8110 | 1452 | 0.7906 | 0.3016 | 0.7906 | 0.8892 |
| 0.1313 | 3.8163 | 1454 | 0.7860 | -0.0154 | 0.7860 | 0.8866 |
| 0.1313 | 3.8215 | 1456 | 0.8437 | 0.3210 | 0.8437 | 0.9185 |
| 0.1313 | 3.8268 | 1458 | 0.8996 | 0.3210 | 0.8996 | 0.9485 |
| 0.1313 | 3.8320 | 1460 | 0.9179 | 0.3210 | 0.9179 | 0.9581 |
| 0.1313 | 3.8373 | 1462 | 0.8228 | 0.3210 | 0.8228 | 0.9071 |
| 0.1313 | 3.8425 | 1464 | 0.7617 | 0.0 | 0.7617 | 0.8727 |
| 0.1313 | 3.8478 | 1466 | 0.7690 | -0.0154 | 0.7690 | 0.8769 |
| 0.1313 | 3.8530 | 1468 | 0.8533 | 0.3210 | 0.8533 | 0.9237 |
| 0.1313 | 3.8583 | 1470 | 0.9036 | 0.3210 | 0.9036 | 0.9506 |
| 0.1313 | 3.8635 | 1472 | 0.8847 | 0.3210 | 0.8847 | 0.9406 |
| 0.1313 | 3.8688 | 1474 | 0.8545 | 0.3210 | 0.8545 | 0.9244 |
| 0.1313 | 3.8740 | 1476 | 0.8200 | 0.2092 | 0.8200 | 0.9055 |
| 0.1313 | 3.8793 | 1478 | 0.8158 | 0.2092 | 0.8158 | 0.9032 |
| 0.1313 | 3.8845 | 1480 | 0.8079 | 0.1037 | 0.8079 | 0.8988 |
| 0.1313 | 3.8898 | 1482 | 0.7836 | -0.0154 | 0.7836 | 0.8852 |
| 0.1313 | 3.8950 | 1484 | 0.7913 | -0.0154 | 0.7913 | 0.8895 |
| 0.1313 | 3.9003 | 1486 | 0.8356 | 0.0892 | 0.8356 | 0.9141 |
| 0.1313 | 3.9055 | 1488 | 0.9712 | 0.3210 | 0.9712 | 0.9855 |
| 0.1313 | 3.9108 | 1490 | 1.0579 | 0.1444 | 1.0579 | 1.0285 |
| 0.1313 | 3.9160 | 1492 | 1.0206 | 0.2924 | 1.0206 | 1.0102 |
| 0.1313 | 3.9213 | 1494 | 0.9031 | 0.3210 | 0.9031 | 0.9503 |
| 0.1313 | 3.9265 | 1496 | 0.8005 | -0.0154 | 0.8005 | 0.8947 |
| 0.1313 | 3.9318 | 1498 | 0.7932 | 0.0 | 0.7932 | 0.8906 |
| 0.0892 | 3.9370 | 1500 | 0.7914 | 0.0 | 0.7914 | 0.8896 |
| 0.0892 | 3.9423 | 1502 | 0.7977 | -0.0154 | 0.7977 | 0.8931 |
| 0.0892 | 3.9475 | 1504 | 0.8282 | -0.2222 | 0.8282 | 0.9101 |
| 0.0892 | 3.9528 | 1506 | 0.8520 | 0.2092 | 0.8520 | 0.9230 |
| 0.0892 | 3.9580 | 1508 | 0.8237 | -0.2222 | 0.8237 | 0.9076 |
| 0.0892 | 3.9633 | 1510 | 0.8026 | -0.0154 | 0.8026 | 0.8959 |
| 0.0892 | 3.9685 | 1512 | 0.7996 | -0.0154 | 0.7996 | 0.8942 |
| 0.0892 | 3.9738 | 1514 | 0.8212 | -0.2222 | 0.8212 | 0.9062 |
| 0.0892 | 3.9790 | 1516 | 0.8140 | 0.1037 | 0.8140 | 0.9022 |
| 0.0892 | 3.9843 | 1518 | 0.8200 | 0.1081 | 0.8200 | 0.9055 |
| 0.0892 | 3.9895 | 1520 | 0.8354 | 0.2048 | 0.8354 | 0.9140 |
| 0.0892 | 3.9948 | 1522 | 0.8236 | 0.0892 | 0.8236 | 0.9075 |
| 0.0892 | 4.0 | 1524 | 0.8539 | 0.2048 | 0.8539 | 0.9241 |
| 0.0892 | 4.0052 | 1526 | 0.8378 | 0.2048 | 0.8378 | 0.9153 |
| 0.0892 | 4.0105 | 1528 | 0.8407 | 0.2048 | 0.8407 | 0.9169 |
| 0.0892 | 4.0157 | 1530 | 0.8315 | 0.2048 | 0.8315 | 0.9118 |
| 0.0892 | 4.0210 | 1532 | 0.8509 | 0.2092 | 0.8509 | 0.9224 |
| 0.0892 | 4.0262 | 1534 | 0.8337 | 0.2048 | 0.8337 | 0.9131 |
| 0.0892 | 4.0315 | 1536 | 0.8602 | 0.2092 | 0.8602 | 0.9274 |
| 0.0892 | 4.0367 | 1538 | 0.8497 | 0.2092 | 0.8497 | 0.9218 |
| 0.0892 | 4.0420 | 1540 | 0.8107 | 0.0892 | 0.8107 | 0.9004 |
| 0.0892 | 4.0472 | 1542 | 0.8104 | 0.0833 | 0.8104 | 0.9002 |
| 0.0892 | 4.0525 | 1544 | 0.8355 | 0.0833 | 0.8355 | 0.9141 |
| 0.0892 | 4.0577 | 1546 | 0.8398 | 0.0833 | 0.8398 | 0.9164 |
| 0.0892 | 4.0630 | 1548 | 0.8580 | 0.0833 | 0.8580 | 0.9263 |
| 0.0892 | 4.0682 | 1550 | 0.8416 | 0.0833 | 0.8416 | 0.9174 |
| 0.0892 | 4.0735 | 1552 | 0.8468 | 0.0833 | 0.8468 | 0.9202 |
| 0.0892 | 4.0787 | 1554 | 0.8324 | 0.0892 | 0.8324 | 0.9124 |
| 0.0892 | 4.0840 | 1556 | 0.8242 | 0.0892 | 0.8242 | 0.9079 |
| 0.0892 | 4.0892 | 1558 | 0.8089 | 0.2568 | 0.8089 | 0.8994 |
| 0.0892 | 4.0945 | 1560 | 0.8514 | 0.2092 | 0.8514 | 0.9227 |
| 0.0892 | 4.0997 | 1562 | 0.8618 | 0.2092 | 0.8618 | 0.9284 |
| 0.0892 | 4.1050 | 1564 | 0.8439 | 0.2092 | 0.8439 | 0.9187 |
| 0.0892 | 4.1102 | 1566 | 0.7940 | 0.2667 | 0.7940 | 0.8911 |
| 0.0892 | 4.1155 | 1568 | 0.7675 | -0.0342 | 0.7675 | 0.8760 |
| 0.0892 | 4.1207 | 1570 | 0.7869 | -0.0342 | 0.7869 | 0.8871 |
| 0.0892 | 4.1260 | 1572 | 0.8552 | 0.2092 | 0.8552 | 0.9248 |
| 0.0892 | 4.1312 | 1574 | 0.8870 | 0.3210 | 0.8870 | 0.9418 |
| 0.0892 | 4.1365 | 1576 | 0.8345 | 0.3889 | 0.8345 | 0.9135 |
| 0.0892 | 4.1417 | 1578 | 0.7978 | -0.0342 | 0.7978 | 0.8932 |
| 0.0892 | 4.1470 | 1580 | 0.7885 | 0.0 | 0.7885 | 0.8880 |
| 0.0892 | 4.1522 | 1582 | 0.7976 | 0.0 | 0.7976 | 0.8931 |
| 0.0892 | 4.1575 | 1584 | 0.8168 | -0.0154 | 0.8168 | 0.9038 |
| 0.0892 | 4.1627 | 1586 | 0.9263 | 0.2092 | 0.9263 | 0.9624 |
| 0.0892 | 4.1680 | 1588 | 1.0255 | 0.0351 | 1.0255 | 1.0127 |
| 0.0892 | 4.1732 | 1590 | 1.0228 | 0.0351 | 1.0228 | 1.0113 |
| 0.0892 | 4.1785 | 1592 | 0.9447 | 0.2092 | 0.9447 | 0.9720 |
| 0.0892 | 4.1837 | 1594 | 0.8427 | -0.0342 | 0.8427 | 0.9180 |
| 0.0892 | 4.1890 | 1596 | 0.8014 | 0.0 | 0.8014 | 0.8952 |
| 0.0892 | 4.1942 | 1598 | 0.8065 | 0.0179 | 0.8065 | 0.8981 |
| 0.0892 | 4.1995 | 1600 | 0.8005 | 0.0179 | 0.8005 | 0.8947 |
| 0.0892 | 4.2047 | 1602 | 0.8089 | -0.0185 | 0.8089 | 0.8994 |
| 0.0892 | 4.2100 | 1604 | 0.8294 | -0.0342 | 0.8294 | 0.9107 |
| 0.0892 | 4.2152 | 1606 | 0.8608 | 0.0833 | 0.8608 | 0.9278 |
| 0.0892 | 4.2205 | 1608 | 0.9130 | 0.2092 | 0.9130 | 0.9555 |
| 0.0892 | 4.2257 | 1610 | 0.8914 | 0.2092 | 0.8914 | 0.9442 |
| 0.0892 | 4.2310 | 1612 | 0.8749 | 0.2092 | 0.8749 | 0.9354 |
| 0.0892 | 4.2362 | 1614 | 0.8247 | -0.0154 | 0.8247 | 0.9081 |
| 0.0892 | 4.2415 | 1616 | 0.7975 | -0.0154 | 0.7975 | 0.8930 |
| 0.0892 | 4.2467 | 1618 | 0.8028 | -0.0154 | 0.8028 | 0.8960 |
| 0.0892 | 4.2520 | 1620 | 0.8198 | 0.3016 | 0.8198 | 0.9054 |
| 0.0892 | 4.2572 | 1622 | 0.8792 | 0.2092 | 0.8792 | 0.9376 |
| 0.0892 | 4.2625 | 1624 | 0.9072 | 0.2092 | 0.9072 | 0.9525 |
| 0.0892 | 4.2677 | 1626 | 0.8754 | 0.2092 | 0.8754 | 0.9356 |
| 0.0892 | 4.2730 | 1628 | 0.7962 | -0.0342 | 0.7962 | 0.8923 |
| 0.0892 | 4.2782 | 1630 | 0.7727 | 0.0 | 0.7727 | 0.8790 |
| 0.0892 | 4.2835 | 1632 | 0.7824 | 0.0 | 0.7824 | 0.8845 |
| 0.0892 | 4.2887 | 1634 | 0.7758 | 0.0 | 0.7758 | 0.8808 |
| 0.0892 | 4.2940 | 1636 | 0.8194 | -0.0342 | 0.8194 | 0.9052 |
| 0.0892 | 4.2992 | 1638 | 0.9055 | 0.2092 | 0.9055 | 0.9516 |
| 0.0892 | 4.3045 | 1640 | 0.9788 | 0.2092 | 0.9788 | 0.9894 |
| 0.0892 | 4.3097 | 1642 | 0.9598 | 0.2092 | 0.9598 | 0.9797 |
| 0.0892 | 4.3150 | 1644 | 0.8790 | 0.2048 | 0.8790 | 0.9376 |
| 0.0892 | 4.3202 | 1646 | 0.8444 | -0.1871 | 0.8444 | 0.9189 |
| 0.0892 | 4.3255 | 1648 | 0.8345 | 0.0 | 0.8345 | 0.9135 |
| 0.0892 | 4.3307 | 1650 | 0.8389 | -0.0154 | 0.8389 | 0.9159 |
| 0.0892 | 4.3360 | 1652 | 0.8823 | 0.2048 | 0.8823 | 0.9393 |
| 0.0892 | 4.3412 | 1654 | 0.9982 | 0.1852 | 0.9982 | 0.9991 |
| 0.0892 | 4.3465 | 1656 | 1.0302 | 0.1444 | 1.0302 | 1.0150 |
| 0.0892 | 4.3517 | 1658 | 0.9649 | 0.2092 | 0.9649 | 0.9823 |
| 0.0892 | 4.3570 | 1660 | 0.8605 | 0.2092 | 0.8605 | 0.9276 |
| 0.0892 | 4.3622 | 1662 | 0.7969 | 0.0 | 0.7969 | 0.8927 |
| 0.0892 | 4.3675 | 1664 | 0.7879 | 0.0 | 0.7879 | 0.8877 |
| 0.0892 | 4.3727 | 1666 | 0.7865 | 0.0 | 0.7865 | 0.8869 |
| 0.0892 | 4.3780 | 1668 | 0.8385 | 0.2092 | 0.8385 | 0.9157 |
| 0.0892 | 4.3832 | 1670 | 0.8844 | 0.2092 | 0.8844 | 0.9404 |
| 0.0892 | 4.3885 | 1672 | 0.8671 | 0.2092 | 0.8671 | 0.9312 |
| 0.0892 | 4.3937 | 1674 | 0.8455 | 0.2092 | 0.8455 | 0.9195 |
| 0.0892 | 4.3990 | 1676 | 0.8419 | 0.2092 | 0.8419 | 0.9175 |
| 0.0892 | 4.4042 | 1678 | 0.8173 | -0.2222 | 0.8173 | 0.9041 |
| 0.0892 | 4.4094 | 1680 | 0.7913 | -0.0154 | 0.7913 | 0.8895 |
| 0.0892 | 4.4147 | 1682 | 0.7888 | -0.0154 | 0.7888 | 0.8881 |
| 0.0892 | 4.4199 | 1684 | 0.8165 | 0.0833 | 0.8165 | 0.9036 |
| 0.0892 | 4.4252 | 1686 | 0.8626 | 0.3210 | 0.8626 | 0.9287 |
| 0.0892 | 4.4304 | 1688 | 0.8651 | 0.3210 | 0.8651 | 0.9301 |
| 0.0892 | 4.4357 | 1690 | 0.8076 | 0.2092 | 0.8076 | 0.8987 |
| 0.0892 | 4.4409 | 1692 | 0.7726 | 0.3016 | 0.7726 | 0.8790 |
| 0.0892 | 4.4462 | 1694 | 0.7537 | 0.3016 | 0.7537 | 0.8682 |
| 0.0892 | 4.4514 | 1696 | 0.7727 | 0.3016 | 0.7727 | 0.8790 |
| 0.0892 | 4.4567 | 1698 | 0.7989 | 0.2092 | 0.7989 | 0.8938 |
| 0.0892 | 4.4619 | 1700 | 0.7936 | 0.2092 | 0.7936 | 0.8908 |
| 0.0892 | 4.4672 | 1702 | 0.8101 | 0.2092 | 0.8101 | 0.9001 |
| 0.0892 | 4.4724 | 1704 | 0.7920 | 0.3016 | 0.7920 | 0.8899 |
| 0.0892 | 4.4777 | 1706 | 0.7833 | 0.3016 | 0.7833 | 0.8851 |
| 0.0892 | 4.4829 | 1708 | 0.7756 | 0.3016 | 0.7756 | 0.8807 |
| 0.0892 | 4.4882 | 1710 | 0.7784 | 0.3016 | 0.7784 | 0.8823 |
| 0.0892 | 4.4934 | 1712 | 0.8211 | 0.3016 | 0.8211 | 0.9061 |
| 0.0892 | 4.4987 | 1714 | 0.8433 | 0.0833 | 0.8433 | 0.9183 |
| 0.0892 | 4.5039 | 1716 | 0.8503 | 0.0833 | 0.8503 | 0.9221 |
| 0.0892 | 4.5092 | 1718 | 0.8368 | 0.1037 | 0.8368 | 0.9148 |
| 0.0892 | 4.5144 | 1720 | 0.8489 | 0.1037 | 0.8489 | 0.9214 |
| 0.0892 | 4.5197 | 1722 | 0.8955 | 0.2092 | 0.8955 | 0.9463 |
| 0.0892 | 4.5249 | 1724 | 0.8775 | 0.0833 | 0.8775 | 0.9368 |
| 0.0892 | 4.5302 | 1726 | 0.8378 | 0.1037 | 0.8378 | 0.9153 |
| 0.0892 | 4.5354 | 1728 | 0.8129 | 0.1037 | 0.8129 | 0.9016 |
| 0.0892 | 4.5407 | 1730 | 0.7986 | 0.3016 | 0.7986 | 0.8937 |
| 0.0892 | 4.5459 | 1732 | 0.7833 | 0.3016 | 0.7833 | 0.8850 |
| 0.0892 | 4.5512 | 1734 | 0.7829 | 0.3016 | 0.7829 | 0.8848 |
| 0.0892 | 4.5564 | 1736 | 0.8033 | 0.1037 | 0.8033 | 0.8963 |
| 0.0892 | 4.5617 | 1738 | 0.8243 | 0.1037 | 0.8243 | 0.9079 |
| 0.0892 | 4.5669 | 1740 | 0.8305 | 0.1037 | 0.8305 | 0.9113 |
| 0.0892 | 4.5722 | 1742 | 0.8180 | 0.1081 | 0.8180 | 0.9044 |
| 0.0892 | 4.5774 | 1744 | 0.8136 | 0.1081 | 0.8136 | 0.9020 |
| 0.0892 | 4.5827 | 1746 | 0.8161 | 0.1081 | 0.8161 | 0.9034 |
| 0.0892 | 4.5879 | 1748 | 0.8226 | 0.1081 | 0.8226 | 0.9070 |
| 0.0892 | 4.5932 | 1750 | 0.8558 | 0.2048 | 0.8558 | 0.9251 |
| 0.0892 | 4.5984 | 1752 | 0.9614 | 0.2092 | 0.9614 | 0.9805 |
| 0.0892 | 4.6037 | 1754 | 0.9652 | 0.0494 | 0.9652 | 0.9825 |
| 0.0892 | 4.6089 | 1756 | 0.8792 | 0.2092 | 0.8792 | 0.9377 |
| 0.0892 | 4.6142 | 1758 | 0.7971 | 0.2092 | 0.7971 | 0.8928 |
| 0.0892 | 4.6194 | 1760 | 0.7555 | 0.3016 | 0.7555 | 0.8692 |
| 0.0892 | 4.6247 | 1762 | 0.7519 | 0.3016 | 0.7519 | 0.8671 |
| 0.0892 | 4.6299 | 1764 | 0.7866 | 0.4296 | 0.7866 | 0.8869 |
| 0.0892 | 4.6352 | 1766 | 0.8555 | 0.2092 | 0.8555 | 0.9249 |
| 0.0892 | 4.6404 | 1768 | 0.8470 | 0.2092 | 0.8470 | 0.9203 |
| 0.0892 | 4.6457 | 1770 | 0.8095 | 0.3889 | 0.8095 | 0.8997 |
| 0.0892 | 4.6509 | 1772 | 0.7864 | 0.3016 | 0.7864 | 0.8868 |
| 0.0892 | 4.6562 | 1774 | 0.7977 | 0.2878 | 0.7977 | 0.8931 |
| 0.0892 | 4.6614 | 1776 | 0.8131 | 0.0892 | 0.8131 | 0.9017 |
| 0.0892 | 4.6667 | 1778 | 0.8088 | 0.0892 | 0.8088 | 0.8993 |
| 0.0892 | 4.6719 | 1780 | 0.8176 | 0.0892 | 0.8176 | 0.9042 |
| 0.0892 | 4.6772 | 1782 | 0.8336 | 0.0833 | 0.8336 | 0.9130 |
| 0.0892 | 4.6824 | 1784 | 0.8660 | 0.0833 | 0.8660 | 0.9306 |
| 0.0892 | 4.6877 | 1786 | 0.8470 | 0.0833 | 0.8470 | 0.9203 |
| 0.0892 | 4.6929 | 1788 | 0.8042 | 0.0833 | 0.8042 | 0.8968 |
| 0.0892 | 4.6982 | 1790 | 0.7720 | 0.2667 | 0.7720 | 0.8786 |
| 0.0892 | 4.7034 | 1792 | 0.7611 | 0.2878 | 0.7611 | 0.8724 |
| 0.0892 | 4.7087 | 1794 | 0.7811 | 0.2667 | 0.7811 | 0.8838 |
| 0.0892 | 4.7139 | 1796 | 0.8411 | 0.0833 | 0.8411 | 0.9171 |
| 0.0892 | 4.7192 | 1798 | 0.9133 | 0.2092 | 0.9133 | 0.9556 |
| 0.0892 | 4.7244 | 1800 | 0.9431 | 0.1637 | 0.9431 | 0.9711 |
| 0.0892 | 4.7297 | 1802 | 0.8841 | 0.0833 | 0.8841 | 0.9402 |
| 0.0892 | 4.7349 | 1804 | 0.8089 | 0.0892 | 0.8089 | 0.8994 |
| 0.0892 | 4.7402 | 1806 | 0.7967 | 0.2878 | 0.7967 | 0.8926 |
| 0.0892 | 4.7454 | 1808 | 0.8064 | 0.0892 | 0.8064 | 0.8980 |
| 0.0892 | 4.7507 | 1810 | 0.8802 | 0.0833 | 0.8802 | 0.9382 |
| 0.0892 | 4.7559 | 1812 | 1.0575 | 0.1637 | 1.0575 | 1.0283 |
| 0.0892 | 4.7612 | 1814 | 1.1252 | 0.1444 | 1.1252 | 1.0607 |
| 0.0892 | 4.7664 | 1816 | 1.0839 | 0.1444 | 1.0839 | 1.0411 |
| 0.0892 | 4.7717 | 1818 | 0.9707 | 0.1637 | 0.9707 | 0.9853 |
| 0.0892 | 4.7769 | 1820 | 0.8325 | 0.0833 | 0.8325 | 0.9124 |
| 0.0892 | 4.7822 | 1822 | 0.7552 | 0.0 | 0.7552 | 0.8690 |
| 0.0892 | 4.7874 | 1824 | 0.7628 | 0.2080 | 0.7628 | 0.8734 |
| 0.0892 | 4.7927 | 1826 | 0.7627 | 0.2080 | 0.7627 | 0.8733 |
| 0.0892 | 4.7979 | 1828 | 0.7590 | 0.0 | 0.7590 | 0.8712 |
| 0.0892 | 4.8031 | 1830 | 0.8176 | 0.0833 | 0.8176 | 0.9042 |
| 0.0892 | 4.8084 | 1832 | 0.8727 | 0.0833 | 0.8727 | 0.9342 |
| 0.0892 | 4.8136 | 1834 | 0.8504 | 0.0833 | 0.8504 | 0.9222 |
| 0.0892 | 4.8189 | 1836 | 0.8134 | 0.0892 | 0.8134 | 0.9019 |
| 0.0892 | 4.8241 | 1838 | 0.8121 | 0.0892 | 0.8121 | 0.9012 |
| 0.0892 | 4.8294 | 1840 | 0.8245 | 0.0892 | 0.8245 | 0.9080 |
| 0.0892 | 4.8346 | 1842 | 0.8526 | 0.0833 | 0.8526 | 0.9234 |
| 0.0892 | 4.8399 | 1844 | 0.8869 | 0.0833 | 0.8869 | 0.9418 |
| 0.0892 | 4.8451 | 1846 | 0.8595 | 0.0833 | 0.8595 | 0.9271 |
| 0.0892 | 4.8504 | 1848 | 0.8169 | 0.2667 | 0.8169 | 0.9038 |
| 0.0892 | 4.8556 | 1850 | 0.8073 | 0.3016 | 0.8073 | 0.8985 |
| 0.0892 | 4.8609 | 1852 | 0.8179 | 0.2667 | 0.8179 | 0.9044 |
| 0.0892 | 4.8661 | 1854 | 0.8395 | 0.0833 | 0.8395 | 0.9162 |
| 0.0892 | 4.8714 | 1856 | 0.8513 | 0.0833 | 0.8513 | 0.9226 |
| 0.0892 | 4.8766 | 1858 | 0.8679 | 0.0833 | 0.8679 | 0.9316 |
| 0.0892 | 4.8819 | 1860 | 0.8210 | 0.0833 | 0.8210 | 0.9061 |
| 0.0892 | 4.8871 | 1862 | 0.7921 | 0.3016 | 0.7921 | 0.8900 |
| 0.0892 | 4.8924 | 1864 | 0.8076 | 0.2667 | 0.8076 | 0.8987 |
| 0.0892 | 4.8976 | 1866 | 0.8602 | 0.0833 | 0.8602 | 0.9275 |
| 0.0892 | 4.9029 | 1868 | 0.8446 | 0.0833 | 0.8446 | 0.9190 |
| 0.0892 | 4.9081 | 1870 | 0.8056 | 0.2667 | 0.8056 | 0.8976 |
| 0.0892 | 4.9134 | 1872 | 0.8216 | 0.0833 | 0.8216 | 0.9064 |
| 0.0892 | 4.9186 | 1874 | 0.8772 | 0.0833 | 0.8772 | 0.9366 |
| 0.0892 | 4.9239 | 1876 | 0.8664 | 0.0833 | 0.8664 | 0.9308 |
| 0.0892 | 4.9291 | 1878 | 0.8055 | 0.0833 | 0.8055 | 0.8975 |
| 0.0892 | 4.9344 | 1880 | 0.7571 | 0.2878 | 0.7571 | 0.8701 |
| 0.0892 | 4.9396 | 1882 | 0.7550 | 0.2878 | 0.7550 | 0.8689 |
| 0.0892 | 4.9449 | 1884 | 0.7765 | 0.2667 | 0.7765 | 0.8812 |
| 0.0892 | 4.9501 | 1886 | 0.8050 | 0.0833 | 0.8050 | 0.8972 |
| 0.0892 | 4.9554 | 1888 | 0.8373 | 0.0833 | 0.8373 | 0.9150 |
| 0.0892 | 4.9606 | 1890 | 0.8176 | 0.0833 | 0.8176 | 0.9042 |
| 0.0892 | 4.9659 | 1892 | 0.8373 | 0.0833 | 0.8373 | 0.9150 |
| 0.0892 | 4.9711 | 1894 | 0.8483 | 0.0833 | 0.8483 | 0.9210 |
| 0.0892 | 4.9764 | 1896 | 0.8192 | 0.0833 | 0.8192 | 0.9051 |
| 0.0892 | 4.9816 | 1898 | 0.8205 | 0.0833 | 0.8205 | 0.9058 |
| 0.0892 | 4.9869 | 1900 | 0.8366 | 0.0833 | 0.8366 | 0.9147 |
| 0.0892 | 4.9921 | 1902 | 0.8322 | 0.0833 | 0.8322 | 0.9122 |
| 0.0892 | 4.9974 | 1904 | 0.8380 | 0.0833 | 0.8380 | 0.9154 |
| 0.0892 | 5.0026 | 1906 | 0.8209 | 0.2667 | 0.8209 | 0.9060 |
| 0.0892 | 5.0079 | 1908 | 0.8050 | 0.3016 | 0.8050 | 0.8972 |
| 0.0892 | 5.0131 | 1910 | 0.8150 | 0.3016 | 0.8150 | 0.9028 |
| 0.0892 | 5.0184 | 1912 | 0.8206 | 0.2667 | 0.8206 | 0.9059 |
| 0.0892 | 5.0236 | 1914 | 0.8120 | 0.2878 | 0.8120 | 0.9011 |
| 0.0892 | 5.0289 | 1916 | 0.8209 | 0.2568 | 0.8209 | 0.9060 |
| 0.0892 | 5.0341 | 1918 | 0.8495 | 0.0892 | 0.8495 | 0.9217 |
| 0.0892 | 5.0394 | 1920 | 0.8664 | 0.0892 | 0.8664 | 0.9308 |
| 0.0892 | 5.0446 | 1922 | 0.8620 | 0.0892 | 0.8620 | 0.9284 |
| 0.0892 | 5.0499 | 1924 | 0.8440 | 0.0892 | 0.8440 | 0.9187 |
| 0.0892 | 5.0551 | 1926 | 0.8210 | 0.0892 | 0.8210 | 0.9061 |
| 0.0892 | 5.0604 | 1928 | 0.8175 | 0.0892 | 0.8175 | 0.9042 |
| 0.0892 | 5.0656 | 1930 | 0.8003 | 0.1037 | 0.8003 | 0.8946 |
| 0.0892 | 5.0709 | 1932 | 0.7853 | -0.0185 | 0.7853 | 0.8862 |
| 0.0892 | 5.0761 | 1934 | 0.7928 | 0.1037 | 0.7928 | 0.8904 |
| 0.0892 | 5.0814 | 1936 | 0.8075 | 0.0833 | 0.8075 | 0.8986 |
| 0.0892 | 5.0866 | 1938 | 0.8167 | 0.0833 | 0.8167 | 0.9037 |
| 0.0892 | 5.0919 | 1940 | 0.8059 | 0.1081 | 0.8059 | 0.8977 |
| 0.0892 | 5.0971 | 1942 | 0.8116 | 0.1081 | 0.8116 | 0.9009 |
| 0.0892 | 5.1024 | 1944 | 0.8127 | -0.0154 | 0.8127 | 0.9015 |
| 0.0892 | 5.1076 | 1946 | 0.8323 | 0.0892 | 0.8323 | 0.9123 |
| 0.0892 | 5.1129 | 1948 | 0.8942 | 0.2092 | 0.8942 | 0.9456 |
| 0.0892 | 5.1181 | 1950 | 0.9448 | 0.2092 | 0.9448 | 0.9720 |
| 0.0892 | 5.1234 | 1952 | 0.9169 | 0.2092 | 0.9169 | 0.9576 |
| 0.0892 | 5.1286 | 1954 | 0.8816 | 0.0833 | 0.8816 | 0.9389 |
| 0.0892 | 5.1339 | 1956 | 0.8331 | 0.0833 | 0.8331 | 0.9128 |
| 0.0892 | 5.1391 | 1958 | 0.8024 | -0.0154 | 0.8024 | 0.8958 |
| 0.0892 | 5.1444 | 1960 | 0.8055 | 0.0 | 0.8055 | 0.8975 |
| 0.0892 | 5.1496 | 1962 | 0.8068 | 0.0 | 0.8068 | 0.8982 |
| 0.0892 | 5.1549 | 1964 | 0.8078 | 0.2878 | 0.8078 | 0.8988 |
| 0.0892 | 5.1601 | 1966 | 0.8487 | 0.0833 | 0.8487 | 0.9213 |
| 0.0892 | 5.1654 | 1968 | 0.8955 | 0.2092 | 0.8955 | 0.9463 |
| 0.0892 | 5.1706 | 1970 | 0.8847 | 0.0833 | 0.8847 | 0.9406 |
| 0.0892 | 5.1759 | 1972 | 0.8634 | 0.0833 | 0.8634 | 0.9292 |
| 0.0892 | 5.1811 | 1974 | 0.8475 | 0.0833 | 0.8475 | 0.9206 |
| 0.0892 | 5.1864 | 1976 | 0.8330 | 0.0833 | 0.8330 | 0.9127 |
| 0.0892 | 5.1916 | 1978 | 0.8432 | 0.0833 | 0.8432 | 0.9183 |
| 0.0892 | 5.1969 | 1980 | 0.8409 | 0.0833 | 0.8409 | 0.9170 |
| 0.0892 | 5.2021 | 1982 | 0.8435 | 0.0833 | 0.8435 | 0.9184 |
| 0.0892 | 5.2073 | 1984 | 0.8377 | 0.0892 | 0.8377 | 0.9152 |
| 0.0892 | 5.2126 | 1986 | 0.8541 | 0.0892 | 0.8541 | 0.9242 |
| 0.0892 | 5.2178 | 1988 | 0.8731 | 0.0892 | 0.8731 | 0.9344 |
| 0.0892 | 5.2231 | 1990 | 0.9370 | 0.0833 | 0.9370 | 0.9680 |
| 0.0892 | 5.2283 | 1992 | 0.9510 | 0.0833 | 0.9510 | 0.9752 |
| 0.0892 | 5.2336 | 1994 | 0.8961 | 0.0833 | 0.8961 | 0.9466 |
| 0.0892 | 5.2388 | 1996 | 0.8480 | 0.0892 | 0.8480 | 0.9209 |
| 0.0892 | 5.2441 | 1998 | 0.8458 | 0.2568 | 0.8458 | 0.9197 |
| 0.0691 | 5.2493 | 2000 | 0.8478 | 0.0892 | 0.8478 | 0.9208 |
| 0.0691 | 5.2546 | 2002 | 0.8713 | 0.0833 | 0.8713 | 0.9334 |
| 0.0691 | 5.2598 | 2004 | 0.8933 | 0.0833 | 0.8933 | 0.9452 |
| 0.0691 | 5.2651 | 2006 | 0.8749 | 0.0833 | 0.8749 | 0.9354 |
| 0.0691 | 5.2703 | 2008 | 0.8377 | 0.0892 | 0.8377 | 0.9153 |
| 0.0691 | 5.2756 | 2010 | 0.8323 | 0.0892 | 0.8323 | 0.9123 |
| 0.0691 | 5.2808 | 2012 | 0.8527 | 0.0892 | 0.8527 | 0.9234 |
| 0.0691 | 5.2861 | 2014 | 0.8879 | 0.0892 | 0.8879 | 0.9423 |
| 0.0691 | 5.2913 | 2016 | 0.9412 | 0.0833 | 0.9412 | 0.9702 |
| 0.0691 | 5.2966 | 2018 | 0.9650 | 0.0892 | 0.9650 | 0.9824 |
| 0.0691 | 5.3018 | 2020 | 0.9373 | 0.0892 | 0.9373 | 0.9681 |
| 0.0691 | 5.3071 | 2022 | 0.9128 | 0.0892 | 0.9128 | 0.9554 |
| 0.0691 | 5.3123 | 2024 | 0.9024 | 0.0892 | 0.9024 | 0.9499 |
| 0.0691 | 5.3176 | 2026 | 0.9032 | 0.0892 | 0.9032 | 0.9504 |
| 0.0691 | 5.3228 | 2028 | 0.9674 | 0.2092 | 0.9674 | 0.9836 |
| 0.0691 | 5.3281 | 2030 | 0.9954 | 0.1637 | 0.9954 | 0.9977 |
| 0.0691 | 5.3333 | 2032 | 0.9468 | 0.2092 | 0.9468 | 0.9730 |
| 0.0691 | 5.3386 | 2034 | 0.8651 | 0.0833 | 0.8651 | 0.9301 |
| 0.0691 | 5.3438 | 2036 | 0.8126 | -0.1871 | 0.8126 | 0.9015 |
| 0.0691 | 5.3491 | 2038 | 0.8047 | 0.0 | 0.8047 | 0.8971 |
| 0.0691 | 5.3543 | 2040 | 0.8172 | 0.1081 | 0.8172 | 0.9040 |
| 0.0691 | 5.3596 | 2042 | 0.8518 | 0.0833 | 0.8518 | 0.9229 |
| 0.0691 | 5.3648 | 2044 | 0.9086 | 0.2092 | 0.9086 | 0.9532 |
| 0.0691 | 5.3701 | 2046 | 0.9193 | 0.2092 | 0.9193 | 0.9588 |
| 0.0691 | 5.3753 | 2048 | 0.9249 | 0.2092 | 0.9249 | 0.9617 |
| 0.0691 | 5.3806 | 2050 | 0.8749 | 0.0833 | 0.8749 | 0.9354 |
| 0.0691 | 5.3858 | 2052 | 0.8363 | 0.0892 | 0.8363 | 0.9145 |
| 0.0691 | 5.3911 | 2054 | 0.8101 | 0.1081 | 0.8101 | 0.9001 |
| 0.0691 | 5.3963 | 2056 | 0.8004 | 0.1081 | 0.8004 | 0.8947 |
| 0.0691 | 5.4016 | 2058 | 0.8076 | 0.0833 | 0.8076 | 0.8987 |
| 0.0691 | 5.4068 | 2060 | 0.8052 | 0.0833 | 0.8052 | 0.8973 |
| 0.0691 | 5.4121 | 2062 | 0.8356 | 0.0833 | 0.8356 | 0.9141 |
| 0.0691 | 5.4173 | 2064 | 0.8438 | 0.0833 | 0.8438 | 0.9186 |
| 0.0691 | 5.4226 | 2066 | 0.8096 | 0.0833 | 0.8096 | 0.8998 |
| 0.0691 | 5.4278 | 2068 | 0.7880 | 0.2878 | 0.7880 | 0.8877 |
| 0.0691 | 5.4331 | 2070 | 0.7815 | 0.0 | 0.7815 | 0.8840 |
| 0.0691 | 5.4383 | 2072 | 0.7902 | 0.0 | 0.7902 | 0.8889 |
| 0.0691 | 5.4436 | 2074 | 0.8150 | 0.0892 | 0.8150 | 0.9028 |
| 0.0691 | 5.4488 | 2076 | 0.8650 | 0.0833 | 0.8650 | 0.9301 |
| 0.0691 | 5.4541 | 2078 | 0.9021 | 0.0833 | 0.9021 | 0.9498 |
| 0.0691 | 5.4593 | 2080 | 0.8900 | 0.0892 | 0.8900 | 0.9434 |
| 0.0691 | 5.4646 | 2082 | 0.8983 | 0.0833 | 0.8983 | 0.9478 |
| 0.0691 | 5.4698 | 2084 | 0.8828 | 0.0892 | 0.8828 | 0.9396 |
| 0.0691 | 5.4751 | 2086 | 0.8409 | 0.0892 | 0.8409 | 0.9170 |
| 0.0691 | 5.4803 | 2088 | 0.8300 | 0.0892 | 0.8300 | 0.9110 |
| 0.0691 | 5.4856 | 2090 | 0.8225 | 0.0892 | 0.8225 | 0.9069 |
| 0.0691 | 5.4908 | 2092 | 0.8167 | 0.0892 | 0.8167 | 0.9037 |
| 0.0691 | 5.4961 | 2094 | 0.8371 | 0.0892 | 0.8371 | 0.9149 |
| 0.0691 | 5.5013 | 2096 | 0.8525 | 0.0833 | 0.8525 | 0.9233 |
| 0.0691 | 5.5066 | 2098 | 0.8791 | 0.0833 | 0.8791 | 0.9376 |
| 0.0691 | 5.5118 | 2100 | 0.8735 | 0.0833 | 0.8735 | 0.9346 |
| 0.0691 | 5.5171 | 2102 | 0.8351 | 0.0833 | 0.8351 | 0.9139 |
| 0.0691 | 5.5223 | 2104 | 0.7809 | 0.3419 | 0.7809 | 0.8837 |
| 0.0691 | 5.5276 | 2106 | 0.7674 | 0.0 | 0.7674 | 0.8760 |
| 0.0691 | 5.5328 | 2108 | 0.7705 | 0.0 | 0.7705 | 0.8778 |
| 0.0691 | 5.5381 | 2110 | 0.7952 | 0.0833 | 0.7952 | 0.8918 |
| 0.0691 | 5.5433 | 2112 | 0.8208 | 0.0833 | 0.8208 | 0.9060 |
| 0.0691 | 5.5486 | 2114 | 0.8405 | 0.0892 | 0.8405 | 0.9168 |
| 0.0691 | 5.5538 | 2116 | 0.8538 | 0.0892 | 0.8538 | 0.9240 |
| 0.0691 | 5.5591 | 2118 | 0.8905 | 0.0833 | 0.8905 | 0.9437 |
| 0.0691 | 5.5643 | 2120 | 0.8989 | 0.0833 | 0.8989 | 0.9481 |
| 0.0691 | 5.5696 | 2122 | 0.8784 | 0.0833 | 0.8784 | 0.9372 |
| 0.0691 | 5.5748 | 2124 | 0.8555 | 0.0892 | 0.8555 | 0.9250 |
| 0.0691 | 5.5801 | 2126 | 0.8808 | 0.0833 | 0.8808 | 0.9385 |
| 0.0691 | 5.5853 | 2128 | 0.9085 | 0.2092 | 0.9085 | 0.9532 |
| 0.0691 | 5.5906 | 2130 | 0.8793 | 0.0833 | 0.8793 | 0.9377 |
| 0.0691 | 5.5958 | 2132 | 0.8200 | 0.0833 | 0.8200 | 0.9055 |
| 0.0691 | 5.6010 | 2134 | 0.7964 | 0.2878 | 0.7964 | 0.8924 |
| 0.0691 | 5.6063 | 2136 | 0.8085 | 0.0892 | 0.8085 | 0.8992 |
| 0.0691 | 5.6115 | 2138 | 0.8386 | 0.0833 | 0.8386 | 0.9157 |
| 0.0691 | 5.6168 | 2140 | 0.9102 | 0.2092 | 0.9102 | 0.9541 |
| 0.0691 | 5.6220 | 2142 | 0.9871 | 0.1637 | 0.9871 | 0.9935 |
| 0.0691 | 5.6273 | 2144 | 1.0329 | 0.1637 | 1.0329 | 1.0163 |
| 0.0691 | 5.6325 | 2146 | 0.9957 | 0.1637 | 0.9957 | 0.9978 |
| 0.0691 | 5.6378 | 2148 | 0.9520 | 0.0494 | 0.9520 | 0.9757 |
| 0.0691 | 5.6430 | 2150 | 0.8803 | 0.0892 | 0.8803 | 0.9382 |
| 0.0691 | 5.6483 | 2152 | 0.8594 | 0.0892 | 0.8594 | 0.9270 |
| 0.0691 | 5.6535 | 2154 | 0.8637 | 0.0892 | 0.8637 | 0.9294 |
| 0.0691 | 5.6588 | 2156 | 0.8869 | 0.0892 | 0.8869 | 0.9418 |
| 0.0691 | 5.6640 | 2158 | 0.9460 | 0.0833 | 0.9460 | 0.9726 |
| 0.0691 | 5.6693 | 2160 | 1.0159 | 0.0494 | 1.0159 | 1.0079 |
| 0.0691 | 5.6745 | 2162 | 1.0131 | 0.0494 | 1.0131 | 1.0065 |
| 0.0691 | 5.6798 | 2164 | 0.9573 | -0.0784 | 0.9573 | 0.9784 |
| 0.0691 | 5.6850 | 2166 | 0.8897 | 0.0833 | 0.8897 | 0.9432 |
| 0.0691 | 5.6903 | 2168 | 0.8805 | 0.3231 | 0.8805 | 0.9384 |
| 0.0691 | 5.6955 | 2170 | 0.9085 | 0.2080 | 0.9085 | 0.9531 |
| 0.0691 | 5.7008 | 2172 | 0.9107 | 0.2080 | 0.9107 | 0.9543 |
| 0.0691 | 5.7060 | 2174 | 0.9004 | 0.3231 | 0.9004 | 0.9489 |
| 0.0691 | 5.7113 | 2176 | 0.9088 | 0.0892 | 0.9088 | 0.9533 |
| 0.0691 | 5.7165 | 2178 | 0.9318 | 0.0833 | 0.9318 | 0.9653 |
| 0.0691 | 5.7218 | 2180 | 0.9426 | 0.0833 | 0.9426 | 0.9709 |
| 0.0691 | 5.7270 | 2182 | 0.9362 | 0.0833 | 0.9362 | 0.9676 |
| 0.0691 | 5.7323 | 2184 | 0.9142 | 0.0892 | 0.9142 | 0.9561 |
| 0.0691 | 5.7375 | 2186 | 0.9009 | 0.0892 | 0.9009 | 0.9492 |
| 0.0691 | 5.7428 | 2188 | 0.8976 | 0.2568 | 0.8976 | 0.9474 |
| 0.0691 | 5.7480 | 2190 | 0.8900 | 0.2878 | 0.8900 | 0.9434 |
| 0.0691 | 5.7533 | 2192 | 0.8881 | 0.2568 | 0.8881 | 0.9424 |
| 0.0691 | 5.7585 | 2194 | 0.9055 | 0.0833 | 0.9055 | 0.9516 |
| 0.0691 | 5.7638 | 2196 | 0.9395 | 0.0833 | 0.9395 | 0.9693 |
| 0.0691 | 5.7690 | 2198 | 0.9315 | 0.0833 | 0.9315 | 0.9652 |
| 0.0691 | 5.7743 | 2200 | 0.8885 | 0.0833 | 0.8885 | 0.9426 |
| 0.0691 | 5.7795 | 2202 | 0.8450 | 0.2878 | 0.8450 | 0.9192 |
| 0.0691 | 5.7848 | 2204 | 0.8458 | 0.0 | 0.8458 | 0.9197 |
| 0.0691 | 5.7900 | 2206 | 0.8568 | 0.1791 | 0.8568 | 0.9256 |
| 0.0691 | 5.7953 | 2208 | 0.8596 | 0.1791 | 0.8596 | 0.9272 |
| 0.0691 | 5.8005 | 2210 | 0.8519 | 0.3231 | 0.8519 | 0.9230 |
| 0.0691 | 5.8058 | 2212 | 0.8689 | 0.0892 | 0.8689 | 0.9321 |
| 0.0691 | 5.8110 | 2214 | 0.9024 | 0.0833 | 0.9024 | 0.9500 |
| 0.0691 | 5.8163 | 2216 | 0.9286 | 0.0833 | 0.9286 | 0.9636 |
| 0.0691 | 5.8215 | 2218 | 0.9357 | 0.0833 | 0.9357 | 0.9673 |
| 0.0691 | 5.8268 | 2220 | 0.9356 | 0.0833 | 0.9356 | 0.9673 |
| 0.0691 | 5.8320 | 2222 | 0.9175 | 0.0833 | 0.9175 | 0.9578 |
| 0.0691 | 5.8373 | 2224 | 0.8886 | 0.2568 | 0.8886 | 0.9427 |
| 0.0691 | 5.8425 | 2226 | 0.8778 | 0.2568 | 0.8778 | 0.9369 |
| 0.0691 | 5.8478 | 2228 | 0.8769 | 0.2667 | 0.8769 | 0.9364 |
| 0.0691 | 5.8530 | 2230 | 0.8896 | 0.0833 | 0.8896 | 0.9432 |
| 0.0691 | 5.8583 | 2232 | 0.8916 | 0.0833 | 0.8916 | 0.9443 |
| 0.0691 | 5.8635 | 2234 | 0.8777 | 0.2667 | 0.8777 | 0.9369 |
| 0.0691 | 5.8688 | 2236 | 0.8697 | 0.2667 | 0.8697 | 0.9326 |
| 0.0691 | 5.8740 | 2238 | 0.8716 | 0.2878 | 0.8716 | 0.9336 |
| 0.0691 | 5.8793 | 2240 | 0.8794 | 0.2568 | 0.8794 | 0.9378 |
| 0.0691 | 5.8845 | 2242 | 0.8920 | 0.2568 | 0.8920 | 0.9445 |
| 0.0691 | 5.8898 | 2244 | 0.8918 | 0.2568 | 0.8918 | 0.9444 |
| 0.0691 | 5.8950 | 2246 | 0.8853 | 0.2568 | 0.8853 | 0.9409 |
| 0.0691 | 5.9003 | 2248 | 0.8887 | 0.2568 | 0.8887 | 0.9427 |
| 0.0691 | 5.9055 | 2250 | 0.8939 | 0.2568 | 0.8939 | 0.9455 |
| 0.0691 | 5.9108 | 2252 | 0.9066 | 0.0892 | 0.9066 | 0.9521 |
| 0.0691 | 5.9160 | 2254 | 0.9332 | 0.0892 | 0.9332 | 0.9660 |
| 0.0691 | 5.9213 | 2256 | 0.9289 | 0.0892 | 0.9289 | 0.9638 |
| 0.0691 | 5.9265 | 2258 | 0.9107 | 0.0892 | 0.9107 | 0.9543 |
| 0.0691 | 5.9318 | 2260 | 0.8937 | 0.0892 | 0.8937 | 0.9454 |
| 0.0691 | 5.9370 | 2262 | 0.8839 | 0.0892 | 0.8839 | 0.9402 |
| 0.0691 | 5.9423 | 2264 | 0.8854 | 0.0892 | 0.8854 | 0.9410 |
| 0.0691 | 5.9475 | 2266 | 0.8803 | 0.0892 | 0.8803 | 0.9382 |
| 0.0691 | 5.9528 | 2268 | 0.8844 | 0.0892 | 0.8844 | 0.9404 |
| 0.0691 | 5.9580 | 2270 | 0.8840 | 0.0833 | 0.8840 | 0.9402 |
| 0.0691 | 5.9633 | 2272 | 0.8782 | 0.0833 | 0.8782 | 0.9371 |
| 0.0691 | 5.9685 | 2274 | 0.8700 | 0.0833 | 0.8700 | 0.9328 |
| 0.0691 | 5.9738 | 2276 | 0.8641 | 0.2667 | 0.8641 | 0.9296 |
| 0.0691 | 5.9790 | 2278 | 0.8528 | 0.2568 | 0.8528 | 0.9234 |
| 0.0691 | 5.9843 | 2280 | 0.8528 | 0.2667 | 0.8528 | 0.9235 |
| 0.0691 | 5.9895 | 2282 | 0.8719 | 0.0833 | 0.8719 | 0.9337 |
| 0.0691 | 5.9948 | 2284 | 0.8904 | 0.2092 | 0.8904 | 0.9436 |
| 0.0691 | 6.0 | 2286 | 0.9232 | 0.0494 | 0.9232 | 0.9608 |
| 0.0691 | 6.0052 | 2288 | 0.9108 | 0.0494 | 0.9108 | 0.9544 |
| 0.0691 | 6.0105 | 2290 | 0.8821 | 0.0833 | 0.8821 | 0.9392 |
| 0.0691 | 6.0157 | 2292 | 0.8648 | 0.2878 | 0.8648 | 0.9300 |
| 0.0691 | 6.0210 | 2294 | 0.8735 | 0.4615 | 0.8735 | 0.9346 |
| 0.0691 | 6.0262 | 2296 | 0.8646 | 0.4615 | 0.8646 | 0.9298 |
| 0.0691 | 6.0315 | 2298 | 0.8571 | 0.2568 | 0.8571 | 0.9258 |
| 0.0691 | 6.0367 | 2300 | 0.8695 | 0.0833 | 0.8695 | 0.9325 |
| 0.0691 | 6.0420 | 2302 | 0.8867 | 0.2092 | 0.8867 | 0.9416 |
| 0.0691 | 6.0472 | 2304 | 0.8810 | 0.2092 | 0.8810 | 0.9386 |
| 0.0691 | 6.0525 | 2306 | 0.8618 | 0.2667 | 0.8618 | 0.9283 |
| 0.0691 | 6.0577 | 2308 | 0.8406 | 0.3016 | 0.8406 | 0.9168 |
| 0.0691 | 6.0630 | 2310 | 0.8347 | 0.3231 | 0.8347 | 0.9136 |
| 0.0691 | 6.0682 | 2312 | 0.8369 | 0.3231 | 0.8369 | 0.9148 |
| 0.0691 | 6.0735 | 2314 | 0.8421 | 0.3231 | 0.8421 | 0.9177 |
| 0.0691 | 6.0787 | 2316 | 0.8540 | 0.3231 | 0.8540 | 0.9241 |
| 0.0691 | 6.0840 | 2318 | 0.8665 | 0.2878 | 0.8665 | 0.9309 |
| 0.0691 | 6.0892 | 2320 | 0.8845 | 0.2568 | 0.8845 | 0.9405 |
| 0.0691 | 6.0945 | 2322 | 0.8924 | 0.2568 | 0.8924 | 0.9447 |
| 0.0691 | 6.0997 | 2324 | 0.9019 | 0.2568 | 0.9019 | 0.9497 |
| 0.0691 | 6.1050 | 2326 | 0.9286 | 0.0892 | 0.9286 | 0.9636 |
| 0.0691 | 6.1102 | 2328 | 0.9455 | 0.0892 | 0.9455 | 0.9724 |
| 0.0691 | 6.1155 | 2330 | 0.9358 | 0.0892 | 0.9358 | 0.9674 |
| 0.0691 | 6.1207 | 2332 | 0.9115 | 0.0892 | 0.9115 | 0.9547 |
| 0.0691 | 6.1260 | 2334 | 0.9037 | 0.2568 | 0.9037 | 0.9506 |
| 0.0691 | 6.1312 | 2336 | 0.8962 | 0.2568 | 0.8962 | 0.9467 |
| 0.0691 | 6.1365 | 2338 | 0.9019 | 0.0892 | 0.9019 | 0.9497 |
| 0.0691 | 6.1417 | 2340 | 0.9044 | 0.0892 | 0.9044 | 0.9510 |
| 0.0691 | 6.1470 | 2342 | 0.9151 | 0.0892 | 0.9151 | 0.9566 |
| 0.0691 | 6.1522 | 2344 | 0.9130 | 0.0892 | 0.9130 | 0.9555 |
| 0.0691 | 6.1575 | 2346 | 0.9086 | 0.2568 | 0.9086 | 0.9532 |
| 0.0691 | 6.1627 | 2348 | 0.9051 | 0.2568 | 0.9051 | 0.9514 |
| 0.0691 | 6.1680 | 2350 | 0.8972 | 0.2568 | 0.8972 | 0.9472 |
| 0.0691 | 6.1732 | 2352 | 0.8912 | 0.2568 | 0.8912 | 0.9440 |
| 0.0691 | 6.1785 | 2354 | 0.8965 | 0.0892 | 0.8965 | 0.9468 |
| 0.0691 | 6.1837 | 2356 | 0.9122 | 0.0833 | 0.9122 | 0.9551 |
| 0.0691 | 6.1890 | 2358 | 0.9298 | 0.0833 | 0.9298 | 0.9643 |
| 0.0691 | 6.1942 | 2360 | 0.9296 | 0.0833 | 0.9296 | 0.9642 |
| 0.0691 | 6.1995 | 2362 | 0.9270 | 0.0892 | 0.9270 | 0.9628 |
| 0.0691 | 6.2047 | 2364 | 0.9211 | 0.0892 | 0.9211 | 0.9598 |
| 0.0691 | 6.2100 | 2366 | 0.9282 | 0.0892 | 0.9282 | 0.9634 |
| 0.0691 | 6.2152 | 2368 | 0.9226 | 0.0892 | 0.9226 | 0.9605 |
| 0.0691 | 6.2205 | 2370 | 0.9058 | 0.0892 | 0.9058 | 0.9518 |
| 0.0691 | 6.2257 | 2372 | 0.8992 | 0.0892 | 0.8992 | 0.9483 |
| 0.0691 | 6.2310 | 2374 | 0.8810 | 0.2568 | 0.8810 | 0.9386 |
| 0.0691 | 6.2362 | 2376 | 0.8734 | 0.2568 | 0.8734 | 0.9346 |
| 0.0691 | 6.2415 | 2378 | 0.8760 | 0.2568 | 0.8760 | 0.9360 |
| 0.0691 | 6.2467 | 2380 | 0.8829 | 0.0892 | 0.8829 | 0.9396 |
| 0.0691 | 6.2520 | 2382 | 0.8764 | 0.0892 | 0.8764 | 0.9362 |
| 0.0691 | 6.2572 | 2384 | 0.8828 | 0.0892 | 0.8828 | 0.9396 |
| 0.0691 | 6.2625 | 2386 | 0.8951 | 0.0892 | 0.8951 | 0.9461 |
| 0.0691 | 6.2677 | 2388 | 0.9021 | 0.0892 | 0.9021 | 0.9498 |
| 0.0691 | 6.2730 | 2390 | 0.9209 | 0.0833 | 0.9209 | 0.9596 |
| 0.0691 | 6.2782 | 2392 | 0.9216 | 0.0833 | 0.9216 | 0.9600 |
| 0.0691 | 6.2835 | 2394 | 0.8967 | 0.0892 | 0.8967 | 0.9469 |
| 0.0691 | 6.2887 | 2396 | 0.8802 | 0.0892 | 0.8802 | 0.9382 |
| 0.0691 | 6.2940 | 2398 | 0.8696 | 0.2568 | 0.8696 | 0.9325 |
| 0.0691 | 6.2992 | 2400 | 0.8681 | 0.2568 | 0.8681 | 0.9317 |
| 0.0691 | 6.3045 | 2402 | 0.8719 | 0.0892 | 0.8719 | 0.9337 |
| 0.0691 | 6.3097 | 2404 | 0.8668 | 0.2568 | 0.8668 | 0.9310 |
| 0.0691 | 6.3150 | 2406 | 0.8663 | 0.2568 | 0.8663 | 0.9308 |
| 0.0691 | 6.3202 | 2408 | 0.8677 | 0.2568 | 0.8677 | 0.9315 |
| 0.0691 | 6.3255 | 2410 | 0.8705 | 0.2568 | 0.8705 | 0.9330 |
| 0.0691 | 6.3307 | 2412 | 0.8692 | 0.2878 | 0.8692 | 0.9323 |
| 0.0691 | 6.3360 | 2414 | 0.8710 | 0.2878 | 0.8710 | 0.9333 |
| 0.0691 | 6.3412 | 2416 | 0.8705 | 0.2878 | 0.8705 | 0.9330 |
| 0.0691 | 6.3465 | 2418 | 0.8799 | 0.0892 | 0.8799 | 0.9380 |
| 0.0691 | 6.3517 | 2420 | 0.8964 | 0.0892 | 0.8964 | 0.9468 |
| 0.0691 | 6.3570 | 2422 | 0.9241 | 0.0833 | 0.9241 | 0.9613 |
| 0.0691 | 6.3622 | 2424 | 0.9313 | 0.0833 | 0.9313 | 0.9651 |
| 0.0691 | 6.3675 | 2426 | 0.9409 | 0.0833 | 0.9409 | 0.9700 |
| 0.0691 | 6.3727 | 2428 | 0.9167 | 0.0892 | 0.9167 | 0.9575 |
| 0.0691 | 6.3780 | 2430 | 0.8944 | 0.0892 | 0.8944 | 0.9457 |
| 0.0691 | 6.3832 | 2432 | 0.8956 | 0.0892 | 0.8956 | 0.9463 |
| 0.0691 | 6.3885 | 2434 | 0.9037 | 0.0892 | 0.9037 | 0.9507 |
| 0.0691 | 6.3937 | 2436 | 0.9089 | 0.0892 | 0.9089 | 0.9534 |
| 0.0691 | 6.3990 | 2438 | 0.9121 | 0.0892 | 0.9121 | 0.9550 |
| 0.0691 | 6.4042 | 2440 | 0.9188 | 0.0892 | 0.9188 | 0.9586 |
| 0.0691 | 6.4094 | 2442 | 0.9296 | 0.0892 | 0.9296 | 0.9642 |
| 0.0691 | 6.4147 | 2444 | 0.9357 | 0.0892 | 0.9357 | 0.9673 |
| 0.0691 | 6.4199 | 2446 | 0.9386 | 0.0892 | 0.9386 | 0.9688 |
| 0.0691 | 6.4252 | 2448 | 0.9412 | 0.0892 | 0.9412 | 0.9701 |
| 0.0691 | 6.4304 | 2450 | 0.9373 | 0.0892 | 0.9373 | 0.9681 |
| 0.0691 | 6.4357 | 2452 | 0.9267 | 0.0892 | 0.9267 | 0.9627 |
| 0.0691 | 6.4409 | 2454 | 0.9282 | 0.0892 | 0.9282 | 0.9634 |
| 0.0691 | 6.4462 | 2456 | 0.9485 | 0.0833 | 0.9485 | 0.9739 |
| 0.0691 | 6.4514 | 2458 | 0.9503 | 0.0892 | 0.9503 | 0.9748 |
| 0.0691 | 6.4567 | 2460 | 0.9447 | 0.0892 | 0.9447 | 0.9720 |
| 0.0691 | 6.4619 | 2462 | 0.9552 | 0.0892 | 0.9552 | 0.9774 |
| 0.0691 | 6.4672 | 2464 | 0.9667 | 0.0892 | 0.9667 | 0.9832 |
| 0.0691 | 6.4724 | 2466 | 0.9739 | 0.0892 | 0.9739 | 0.9869 |
| 0.0691 | 6.4777 | 2468 | 0.9763 | 0.0892 | 0.9763 | 0.9881 |
| 0.0691 | 6.4829 | 2470 | 0.9664 | 0.0892 | 0.9664 | 0.9830 |
| 0.0691 | 6.4882 | 2472 | 0.9590 | 0.0892 | 0.9590 | 0.9793 |
| 0.0691 | 6.4934 | 2474 | 0.9631 | 0.0892 | 0.9631 | 0.9814 |
| 0.0691 | 6.4987 | 2476 | 0.9689 | 0.0892 | 0.9689 | 0.9843 |
| 0.0691 | 6.5039 | 2478 | 0.9822 | 0.0892 | 0.9822 | 0.9910 |
| 0.0691 | 6.5092 | 2480 | 1.0197 | -0.0784 | 1.0197 | 1.0098 |
| 0.0691 | 6.5144 | 2482 | 1.0549 | -0.0784 | 1.0549 | 1.0271 |
| 0.0691 | 6.5197 | 2484 | 1.0492 | -0.0784 | 1.0492 | 1.0243 |
| 0.0691 | 6.5249 | 2486 | 1.0145 | 0.0833 | 1.0145 | 1.0072 |
| 0.0691 | 6.5302 | 2488 | 0.9845 | 0.0892 | 0.9845 | 0.9922 |
| 0.0691 | 6.5354 | 2490 | 0.9721 | 0.0892 | 0.9721 | 0.9860 |
| 0.0691 | 6.5407 | 2492 | 0.9781 | 0.0892 | 0.9781 | 0.9890 |
| 0.0691 | 6.5459 | 2494 | 0.9868 | 0.0892 | 0.9868 | 0.9934 |
| 0.0691 | 6.5512 | 2496 | 0.9870 | 0.0892 | 0.9870 | 0.9935 |
| 0.0691 | 6.5564 | 2498 | 0.9815 | 0.0892 | 0.9815 | 0.9907 |
| 0.0543 | 6.5617 | 2500 | 0.9864 | 0.0892 | 0.9864 | 0.9932 |
| 0.0543 | 6.5669 | 2502 | 0.9921 | 0.0833 | 0.9921 | 0.9960 |
| 0.0543 | 6.5722 | 2504 | 0.9745 | 0.0892 | 0.9745 | 0.9872 |
| 0.0543 | 6.5774 | 2506 | 0.9604 | 0.0892 | 0.9604 | 0.9800 |
| 0.0543 | 6.5827 | 2508 | 0.9482 | 0.2568 | 0.9482 | 0.9738 |
| 0.0543 | 6.5879 | 2510 | 0.9473 | 0.2568 | 0.9473 | 0.9733 |
| 0.0543 | 6.5932 | 2512 | 0.9442 | 0.2878 | 0.9442 | 0.9717 |
| 0.0543 | 6.5984 | 2514 | 0.9406 | 0.2568 | 0.9406 | 0.9699 |
| 0.0543 | 6.6037 | 2516 | 0.9549 | 0.0892 | 0.9549 | 0.9772 |
| 0.0543 | 6.6089 | 2518 | 0.9626 | 0.0892 | 0.9626 | 0.9811 |
| 0.0543 | 6.6142 | 2520 | 0.9550 | 0.0892 | 0.9550 | 0.9772 |
| 0.0543 | 6.6194 | 2522 | 0.9540 | 0.0892 | 0.9540 | 0.9767 |
| 0.0543 | 6.6247 | 2524 | 0.9494 | 0.0892 | 0.9494 | 0.9744 |
| 0.0543 | 6.6299 | 2526 | 0.9330 | 0.2568 | 0.9330 | 0.9659 |
| 0.0543 | 6.6352 | 2528 | 0.9232 | 0.2568 | 0.9232 | 0.9608 |
| 0.0543 | 6.6404 | 2530 | 0.9202 | 0.2568 | 0.9202 | 0.9593 |
| 0.0543 | 6.6457 | 2532 | 0.9199 | 0.2568 | 0.9199 | 0.9591 |
| 0.0543 | 6.6509 | 2534 | 0.9262 | 0.2568 | 0.9262 | 0.9624 |
| 0.0543 | 6.6562 | 2536 | 0.9446 | 0.0892 | 0.9446 | 0.9719 |
| 0.0543 | 6.6614 | 2538 | 0.9629 | 0.0833 | 0.9629 | 0.9813 |
| 0.0543 | 6.6667 | 2540 | 0.9769 | -0.0784 | 0.9769 | 0.9884 |
| 0.0543 | 6.6719 | 2542 | 0.9702 | 0.0892 | 0.9702 | 0.9850 |
| 0.0543 | 6.6772 | 2544 | 0.9700 | 0.2568 | 0.9700 | 0.9849 |
| 0.0543 | 6.6824 | 2546 | 0.9836 | 0.2568 | 0.9836 | 0.9918 |
| 0.0543 | 6.6877 | 2548 | 0.9909 | 0.2568 | 0.9909 | 0.9954 |
| 0.0543 | 6.6929 | 2550 | 0.9955 | 0.0892 | 0.9955 | 0.9977 |
| 0.0543 | 6.6982 | 2552 | 1.0157 | -0.0602 | 1.0157 | 1.0078 |
| 0.0543 | 6.7034 | 2554 | 1.0333 | -0.0784 | 1.0333 | 1.0165 |
| 0.0543 | 6.7087 | 2556 | 1.0218 | -0.0784 | 1.0218 | 1.0108 |
| 0.0543 | 6.7139 | 2558 | 0.9983 | -0.0784 | 0.9983 | 0.9991 |
| 0.0543 | 6.7192 | 2560 | 0.9681 | 0.0892 | 0.9681 | 0.9839 |
| 0.0543 | 6.7244 | 2562 | 0.9547 | 0.0892 | 0.9547 | 0.9771 |
| 0.0543 | 6.7297 | 2564 | 0.9546 | 0.0892 | 0.9546 | 0.9770 |
| 0.0543 | 6.7349 | 2566 | 0.9537 | 0.0833 | 0.9537 | 0.9766 |
| 0.0543 | 6.7402 | 2568 | 0.9467 | 0.0892 | 0.9467 | 0.9730 |
| 0.0543 | 6.7454 | 2570 | 0.9304 | 0.2568 | 0.9304 | 0.9646 |
| 0.0543 | 6.7507 | 2572 | 0.9285 | 0.2568 | 0.9285 | 0.9636 |
| 0.0543 | 6.7559 | 2574 | 0.9378 | 0.0892 | 0.9378 | 0.9684 |
| 0.0543 | 6.7612 | 2576 | 0.9388 | 0.0833 | 0.9388 | 0.9689 |
| 0.0543 | 6.7664 | 2578 | 0.9546 | 0.0833 | 0.9546 | 0.9770 |
| 0.0543 | 6.7717 | 2580 | 0.9622 | 0.0833 | 0.9622 | 0.9809 |
| 0.0543 | 6.7769 | 2582 | 0.9521 | 0.0833 | 0.9521 | 0.9758 |
| 0.0543 | 6.7822 | 2584 | 0.9370 | 0.0833 | 0.9370 | 0.9680 |
| 0.0543 | 6.7874 | 2586 | 0.9223 | 0.2568 | 0.9223 | 0.9603 |
| 0.0543 | 6.7927 | 2588 | 0.9178 | 0.2568 | 0.9178 | 0.9580 |
| 0.0543 | 6.7979 | 2590 | 0.9151 | 0.2878 | 0.9151 | 0.9566 |
| 0.0543 | 6.8031 | 2592 | 0.9193 | 0.2568 | 0.9193 | 0.9588 |
| 0.0543 | 6.8084 | 2594 | 0.9345 | 0.0892 | 0.9345 | 0.9667 |
| 0.0543 | 6.8136 | 2596 | 0.9383 | 0.0892 | 0.9383 | 0.9686 |
| 0.0543 | 6.8189 | 2598 | 0.9364 | 0.0892 | 0.9364 | 0.9677 |
| 0.0543 | 6.8241 | 2600 | 0.9313 | 0.0892 | 0.9313 | 0.9650 |
| 0.0543 | 6.8294 | 2602 | 0.9357 | 0.0892 | 0.9357 | 0.9673 |
| 0.0543 | 6.8346 | 2604 | 0.9470 | 0.0892 | 0.9470 | 0.9731 |
| 0.0543 | 6.8399 | 2606 | 0.9544 | 0.0892 | 0.9544 | 0.9769 |
| 0.0543 | 6.8451 | 2608 | 0.9669 | 0.0892 | 0.9669 | 0.9833 |
| 0.0543 | 6.8504 | 2610 | 0.9805 | 0.0892 | 0.9805 | 0.9902 |
| 0.0543 | 6.8556 | 2612 | 0.9934 | 0.0892 | 0.9934 | 0.9967 |
| 0.0543 | 6.8609 | 2614 | 0.9924 | 0.0892 | 0.9924 | 0.9962 |
| 0.0543 | 6.8661 | 2616 | 0.9809 | 0.0892 | 0.9809 | 0.9904 |
| 0.0543 | 6.8714 | 2618 | 0.9839 | 0.0892 | 0.9839 | 0.9919 |
| 0.0543 | 6.8766 | 2620 | 0.9929 | 0.0892 | 0.9929 | 0.9964 |
| 0.0543 | 6.8819 | 2622 | 0.9983 | -0.0784 | 0.9983 | 0.9992 |
| 0.0543 | 6.8871 | 2624 | 0.9779 | 0.0833 | 0.9779 | 0.9889 |
| 0.0543 | 6.8924 | 2626 | 0.9634 | 0.0892 | 0.9634 | 0.9815 |
| 0.0543 | 6.8976 | 2628 | 0.9463 | 0.0892 | 0.9463 | 0.9728 |
| 0.0543 | 6.9029 | 2630 | 0.9493 | 0.0892 | 0.9493 | 0.9743 |
| 0.0543 | 6.9081 | 2632 | 0.9598 | 0.0892 | 0.9598 | 0.9797 |
| 0.0543 | 6.9134 | 2634 | 0.9807 | 0.0833 | 0.9807 | 0.9903 |
| 0.0543 | 6.9186 | 2636 | 0.9985 | -0.0784 | 0.9985 | 0.9992 |
| 0.0543 | 6.9239 | 2638 | 0.9944 | -0.0784 | 0.9944 | 0.9972 |
| 0.0543 | 6.9291 | 2640 | 0.9802 | 0.0833 | 0.9802 | 0.9901 |
| 0.0543 | 6.9344 | 2642 | 0.9635 | 0.0892 | 0.9635 | 0.9816 |
| 0.0543 | 6.9396 | 2644 | 0.9597 | 0.0892 | 0.9597 | 0.9797 |
| 0.0543 | 6.9449 | 2646 | 0.9601 | 0.0833 | 0.9601 | 0.9798 |
| 0.0543 | 6.9501 | 2648 | 0.9578 | 0.0833 | 0.9578 | 0.9787 |
| 0.0543 | 6.9554 | 2650 | 0.9655 | -0.0784 | 0.9655 | 0.9826 |
| 0.0543 | 6.9606 | 2652 | 0.9692 | -0.0784 | 0.9692 | 0.9845 |
| 0.0543 | 6.9659 | 2654 | 0.9596 | -0.0784 | 0.9596 | 0.9796 |
| 0.0543 | 6.9711 | 2656 | 0.9488 | 0.0833 | 0.9488 | 0.9741 |
| 0.0543 | 6.9764 | 2658 | 0.9396 | 0.2568 | 0.9396 | 0.9693 |
| 0.0543 | 6.9816 | 2660 | 0.9443 | 0.2568 | 0.9443 | 0.9718 |
| 0.0543 | 6.9869 | 2662 | 0.9509 | 0.2568 | 0.9509 | 0.9751 |
| 0.0543 | 6.9921 | 2664 | 0.9570 | 0.0892 | 0.9570 | 0.9783 |
| 0.0543 | 6.9974 | 2666 | 0.9583 | 0.0892 | 0.9583 | 0.9789 |
| 0.0543 | 7.0026 | 2668 | 0.9602 | 0.0892 | 0.9602 | 0.9799 |
| 0.0543 | 7.0079 | 2670 | 0.9616 | 0.0892 | 0.9616 | 0.9806 |
| 0.0543 | 7.0131 | 2672 | 0.9875 | -0.0784 | 0.9875 | 0.9937 |
| 0.0543 | 7.0184 | 2674 | 1.0030 | -0.0784 | 1.0030 | 1.0015 |
| 0.0543 | 7.0236 | 2676 | 0.9870 | -0.0784 | 0.9870 | 0.9935 |
| 0.0543 | 7.0289 | 2678 | 0.9549 | 0.0833 | 0.9549 | 0.9772 |
| 0.0543 | 7.0341 | 2680 | 0.9309 | 0.0833 | 0.9309 | 0.9648 |
| 0.0543 | 7.0394 | 2682 | 0.9138 | 0.0892 | 0.9138 | 0.9559 |
| 0.0543 | 7.0446 | 2684 | 0.9063 | 0.2568 | 0.9063 | 0.9520 |
| 0.0543 | 7.0499 | 2686 | 0.9049 | 0.2568 | 0.9049 | 0.9513 |
| 0.0543 | 7.0551 | 2688 | 0.8997 | 0.2878 | 0.8997 | 0.9485 |
| 0.0543 | 7.0604 | 2690 | 0.9061 | 0.2568 | 0.9061 | 0.9519 |
| 0.0543 | 7.0656 | 2692 | 0.9145 | 0.0833 | 0.9145 | 0.9563 |
| 0.0543 | 7.0709 | 2694 | 0.9177 | 0.0833 | 0.9177 | 0.9580 |
| 0.0543 | 7.0761 | 2696 | 0.9222 | 0.0833 | 0.9222 | 0.9603 |
| 0.0543 | 7.0814 | 2698 | 0.9394 | 0.0833 | 0.9394 | 0.9692 |
| 0.0543 | 7.0866 | 2700 | 0.9499 | 0.0833 | 0.9499 | 0.9746 |
| 0.0543 | 7.0919 | 2702 | 0.9492 | 0.0833 | 0.9492 | 0.9743 |
| 0.0543 | 7.0971 | 2704 | 0.9431 | 0.0892 | 0.9431 | 0.9711 |
| 0.0543 | 7.1024 | 2706 | 0.9443 | 0.0892 | 0.9443 | 0.9717 |
| 0.0543 | 7.1076 | 2708 | 0.9481 | 0.0892 | 0.9481 | 0.9737 |
| 0.0543 | 7.1129 | 2710 | 0.9519 | 0.0833 | 0.9519 | 0.9756 |
| 0.0543 | 7.1181 | 2712 | 0.9487 | 0.0892 | 0.9487 | 0.9740 |
| 0.0543 | 7.1234 | 2714 | 0.9494 | 0.0892 | 0.9494 | 0.9744 |
| 0.0543 | 7.1286 | 2716 | 0.9607 | 0.0892 | 0.9607 | 0.9802 |
| 0.0543 | 7.1339 | 2718 | 0.9785 | 0.0892 | 0.9785 | 0.9892 |
| 0.0543 | 7.1391 | 2720 | 1.0039 | 0.0833 | 1.0039 | 1.0020 |
| 0.0543 | 7.1444 | 2722 | 1.0278 | -0.0784 | 1.0278 | 1.0138 |
| 0.0543 | 7.1496 | 2724 | 1.0234 | 0.0833 | 1.0234 | 1.0116 |
| 0.0543 | 7.1549 | 2726 | 1.0111 | 0.0892 | 1.0111 | 1.0055 |
| 0.0543 | 7.1601 | 2728 | 0.9946 | 0.0892 | 0.9946 | 0.9973 |
| 0.0543 | 7.1654 | 2730 | 0.9796 | 0.0892 | 0.9796 | 0.9897 |
| 0.0543 | 7.1706 | 2732 | 0.9662 | 0.2568 | 0.9662 | 0.9830 |
| 0.0543 | 7.1759 | 2734 | 0.9580 | 0.2568 | 0.9580 | 0.9788 |
| 0.0543 | 7.1811 | 2736 | 0.9602 | 0.0892 | 0.9602 | 0.9799 |
| 0.0543 | 7.1864 | 2738 | 0.9687 | 0.0892 | 0.9687 | 0.9842 |
| 0.0543 | 7.1916 | 2740 | 0.9862 | 0.0833 | 0.9862 | 0.9931 |
| 0.0543 | 7.1969 | 2742 | 0.9910 | 0.0833 | 0.9910 | 0.9955 |
| 0.0543 | 7.2021 | 2744 | 0.9857 | 0.0892 | 0.9857 | 0.9928 |
| 0.0543 | 7.2073 | 2746 | 0.9893 | 0.0833 | 0.9893 | 0.9946 |
| 0.0543 | 7.2126 | 2748 | 0.9849 | 0.0892 | 0.9849 | 0.9924 |
| 0.0543 | 7.2178 | 2750 | 0.9807 | 0.0892 | 0.9807 | 0.9903 |
| 0.0543 | 7.2231 | 2752 | 0.9740 | 0.0892 | 0.9740 | 0.9869 |
| 0.0543 | 7.2283 | 2754 | 0.9709 | 0.0892 | 0.9709 | 0.9854 |
| 0.0543 | 7.2336 | 2756 | 0.9777 | 0.0892 | 0.9777 | 0.9888 |
| 0.0543 | 7.2388 | 2758 | 0.9824 | 0.0833 | 0.9824 | 0.9912 |
| 0.0543 | 7.2441 | 2760 | 0.9826 | 0.0833 | 0.9826 | 0.9912 |
| 0.0543 | 7.2493 | 2762 | 0.9723 | 0.0892 | 0.9723 | 0.9861 |
| 0.0543 | 7.2546 | 2764 | 0.9719 | 0.0833 | 0.9719 | 0.9858 |
| 0.0543 | 7.2598 | 2766 | 0.9689 | 0.0833 | 0.9689 | 0.9843 |
| 0.0543 | 7.2651 | 2768 | 0.9680 | 0.0833 | 0.9680 | 0.9839 |
| 0.0543 | 7.2703 | 2770 | 0.9724 | 0.0833 | 0.9724 | 0.9861 |
| 0.0543 | 7.2756 | 2772 | 0.9734 | 0.0833 | 0.9734 | 0.9866 |
| 0.0543 | 7.2808 | 2774 | 0.9791 | 0.0833 | 0.9791 | 0.9895 |
| 0.0543 | 7.2861 | 2776 | 0.9729 | 0.0892 | 0.9729 | 0.9864 |
| 0.0543 | 7.2913 | 2778 | 0.9732 | 0.0892 | 0.9732 | 0.9865 |
| 0.0543 | 7.2966 | 2780 | 0.9748 | 0.0892 | 0.9748 | 0.9873 |
| 0.0543 | 7.3018 | 2782 | 0.9798 | 0.0892 | 0.9798 | 0.9899 |
| 0.0543 | 7.3071 | 2784 | 0.9946 | 0.0833 | 0.9946 | 0.9973 |
| 0.0543 | 7.3123 | 2786 | 1.0237 | -0.0784 | 1.0237 | 1.0118 |
| 0.0543 | 7.3176 | 2788 | 1.0290 | -0.0784 | 1.0290 | 1.0144 |
| 0.0543 | 7.3228 | 2790 | 1.0097 | -0.0784 | 1.0097 | 1.0048 |
| 0.0543 | 7.3281 | 2792 | 0.9808 | 0.0833 | 0.9808 | 0.9903 |
| 0.0543 | 7.3333 | 2794 | 0.9638 | 0.0833 | 0.9638 | 0.9818 |
| 0.0543 | 7.3386 | 2796 | 0.9476 | 0.0892 | 0.9476 | 0.9734 |
| 0.0543 | 7.3438 | 2798 | 0.9428 | 0.0892 | 0.9428 | 0.9710 |
| 0.0543 | 7.3491 | 2800 | 0.9461 | 0.0892 | 0.9461 | 0.9727 |
| 0.0543 | 7.3543 | 2802 | 0.9509 | 0.0833 | 0.9509 | 0.9751 |
| 0.0543 | 7.3596 | 2804 | 0.9533 | 0.0833 | 0.9533 | 0.9764 |
| 0.0543 | 7.3648 | 2806 | 0.9509 | 0.0833 | 0.9509 | 0.9751 |
| 0.0543 | 7.3701 | 2808 | 0.9439 | 0.0833 | 0.9439 | 0.9715 |
| 0.0543 | 7.3753 | 2810 | 0.9428 | 0.0892 | 0.9428 | 0.9710 |
| 0.0543 | 7.3806 | 2812 | 0.9393 | 0.0833 | 0.9393 | 0.9692 |
| 0.0543 | 7.3858 | 2814 | 0.9480 | 0.0833 | 0.9480 | 0.9736 |
| 0.0543 | 7.3911 | 2816 | 0.9606 | 0.0833 | 0.9606 | 0.9801 |
| 0.0543 | 7.3963 | 2818 | 0.9634 | 0.0833 | 0.9634 | 0.9815 |
| 0.0543 | 7.4016 | 2820 | 0.9536 | 0.0833 | 0.9536 | 0.9765 |
| 0.0543 | 7.4068 | 2822 | 0.9444 | 0.0892 | 0.9444 | 0.9718 |
| 0.0543 | 7.4121 | 2824 | 0.9350 | 0.2568 | 0.9350 | 0.9670 |
| 0.0543 | 7.4173 | 2826 | 0.9296 | 0.2568 | 0.9296 | 0.9642 |
| 0.0543 | 7.4226 | 2828 | 0.9317 | 0.2568 | 0.9317 | 0.9653 |
| 0.0543 | 7.4278 | 2830 | 0.9369 | 0.0833 | 0.9369 | 0.9680 |
| 0.0543 | 7.4331 | 2832 | 0.9562 | 0.0833 | 0.9562 | 0.9779 |
| 0.0543 | 7.4383 | 2834 | 0.9698 | -0.0784 | 0.9698 | 0.9848 |
| 0.0543 | 7.4436 | 2836 | 0.9650 | -0.0784 | 0.9650 | 0.9823 |
| 0.0543 | 7.4488 | 2838 | 0.9649 | 0.0833 | 0.9649 | 0.9823 |
| 0.0543 | 7.4541 | 2840 | 0.9802 | 0.0833 | 0.9802 | 0.9901 |
| 0.0543 | 7.4593 | 2842 | 0.9911 | 0.0892 | 0.9911 | 0.9955 |
| 0.0543 | 7.4646 | 2844 | 0.9939 | 0.0892 | 0.9939 | 0.9970 |
| 0.0543 | 7.4698 | 2846 | 0.9914 | 0.0892 | 0.9914 | 0.9957 |
| 0.0543 | 7.4751 | 2848 | 0.9959 | 0.0892 | 0.9959 | 0.9980 |
| 0.0543 | 7.4803 | 2850 | 0.9955 | 0.0892 | 0.9955 | 0.9978 |
| 0.0543 | 7.4856 | 2852 | 1.0033 | 0.0892 | 1.0033 | 1.0016 |
| 0.0543 | 7.4908 | 2854 | 1.0089 | 0.0892 | 1.0089 | 1.0044 |
| 0.0543 | 7.4961 | 2856 | 1.0175 | 0.0892 | 1.0175 | 1.0087 |
| 0.0543 | 7.5013 | 2858 | 1.0236 | -0.0784 | 1.0236 | 1.0117 |
| 0.0543 | 7.5066 | 2860 | 1.0194 | -0.0784 | 1.0194 | 1.0097 |
| 0.0543 | 7.5118 | 2862 | 1.0047 | -0.0784 | 1.0047 | 1.0023 |
| 0.0543 | 7.5171 | 2864 | 0.9860 | 0.0833 | 0.9860 | 0.9930 |
| 0.0543 | 7.5223 | 2866 | 0.9812 | 0.0833 | 0.9812 | 0.9906 |
| 0.0543 | 7.5276 | 2868 | 0.9714 | 0.0833 | 0.9714 | 0.9856 |
| 0.0543 | 7.5328 | 2870 | 0.9644 | 0.0892 | 0.9644 | 0.9820 |
| 0.0543 | 7.5381 | 2872 | 0.9630 | 0.0892 | 0.9630 | 0.9813 |
| 0.0543 | 7.5433 | 2874 | 0.9592 | 0.2568 | 0.9592 | 0.9794 |
| 0.0543 | 7.5486 | 2876 | 0.9727 | 0.1316 | 0.9727 | 0.9863 |
| 0.0543 | 7.5538 | 2878 | 0.9883 | 0.1538 | 0.9883 | 0.9941 |
| 0.0543 | 7.5591 | 2880 | 0.9924 | 0.1316 | 0.9924 | 0.9962 |
| 0.0543 | 7.5643 | 2882 | 1.0016 | 0.0892 | 1.0016 | 1.0008 |
| 0.0543 | 7.5696 | 2884 | 1.0358 | 0.0892 | 1.0358 | 1.0177 |
| 0.0543 | 7.5748 | 2886 | 1.0930 | -0.0784 | 1.0930 | 1.0455 |
| 0.0543 | 7.5801 | 2888 | 1.1269 | 0.0494 | 1.1269 | 1.0615 |
| 0.0543 | 7.5853 | 2890 | 1.1232 | 0.0494 | 1.1232 | 1.0598 |
| 0.0543 | 7.5906 | 2892 | 1.0906 | -0.0784 | 1.0906 | 1.0443 |
| 0.0543 | 7.5958 | 2894 | 1.0476 | -0.0784 | 1.0476 | 1.0235 |
| 0.0543 | 7.6010 | 2896 | 1.0256 | 0.0892 | 1.0256 | 1.0127 |
| 0.0543 | 7.6063 | 2898 | 1.0160 | 0.0892 | 1.0160 | 1.0080 |
| 0.0543 | 7.6115 | 2900 | 1.0108 | 0.0892 | 1.0108 | 1.0054 |
| 0.0543 | 7.6168 | 2902 | 1.0043 | 0.0892 | 1.0043 | 1.0021 |
| 0.0543 | 7.6220 | 2904 | 1.0057 | 0.0892 | 1.0057 | 1.0029 |
| 0.0543 | 7.6273 | 2906 | 1.0003 | 0.0892 | 1.0003 | 1.0002 |
| 0.0543 | 7.6325 | 2908 | 0.9976 | 0.0892 | 0.9976 | 0.9988 |
| 0.0543 | 7.6378 | 2910 | 1.0017 | 0.0892 | 1.0017 | 1.0008 |
| 0.0543 | 7.6430 | 2912 | 1.0031 | 0.0833 | 1.0031 | 1.0015 |
| 0.0543 | 7.6483 | 2914 | 1.0075 | -0.0784 | 1.0074 | 1.0037 |
| 0.0543 | 7.6535 | 2916 | 0.9951 | 0.0833 | 0.9951 | 0.9976 |
| 0.0543 | 7.6588 | 2918 | 0.9729 | 0.0892 | 0.9729 | 0.9864 |
| 0.0543 | 7.6640 | 2920 | 0.9613 | 0.0892 | 0.9613 | 0.9804 |
| 0.0543 | 7.6693 | 2922 | 0.9596 | 0.2878 | 0.9596 | 0.9796 |
| 0.0543 | 7.6745 | 2924 | 0.9620 | 0.0892 | 0.9620 | 0.9808 |
| 0.0543 | 7.6798 | 2926 | 0.9689 | 0.0892 | 0.9689 | 0.9843 |
| 0.0543 | 7.6850 | 2928 | 0.9872 | 0.0833 | 0.9872 | 0.9936 |
| 0.0543 | 7.6903 | 2930 | 1.0077 | -0.0784 | 1.0077 | 1.0038 |
| 0.0543 | 7.6955 | 2932 | 1.0145 | -0.0784 | 1.0145 | 1.0072 |
| 0.0543 | 7.7008 | 2934 | 1.0184 | -0.0784 | 1.0184 | 1.0092 |
| 0.0543 | 7.7060 | 2936 | 1.0079 | 0.0892 | 1.0079 | 1.0040 |
| 0.0543 | 7.7113 | 2938 | 1.0013 | 0.0892 | 1.0013 | 1.0006 |
| 0.0543 | 7.7165 | 2940 | 0.9937 | 0.0892 | 0.9937 | 0.9969 |
| 0.0543 | 7.7218 | 2942 | 0.9869 | 0.0892 | 0.9869 | 0.9934 |
| 0.0543 | 7.7270 | 2944 | 0.9864 | 0.0892 | 0.9864 | 0.9932 |
| 0.0543 | 7.7323 | 2946 | 0.9949 | 0.0892 | 0.9949 | 0.9974 |
| 0.0543 | 7.7375 | 2948 | 1.0062 | 0.0892 | 1.0062 | 1.0031 |
| 0.0543 | 7.7428 | 2950 | 1.0049 | 0.0892 | 1.0049 | 1.0025 |
| 0.0543 | 7.7480 | 2952 | 0.9930 | 0.0892 | 0.9930 | 0.9965 |
| 0.0543 | 7.7533 | 2954 | 0.9812 | 0.0892 | 0.9812 | 0.9906 |
| 0.0543 | 7.7585 | 2956 | 0.9743 | 0.0892 | 0.9743 | 0.9871 |
| 0.0543 | 7.7638 | 2958 | 0.9762 | 0.0892 | 0.9762 | 0.9880 |
| 0.0543 | 7.7690 | 2960 | 0.9786 | 0.0833 | 0.9786 | 0.9892 |
| 0.0543 | 7.7743 | 2962 | 0.9838 | 0.0833 | 0.9838 | 0.9919 |
| 0.0543 | 7.7795 | 2964 | 0.9726 | 0.0833 | 0.9726 | 0.9862 |
| 0.0543 | 7.7848 | 2966 | 0.9581 | 0.0892 | 0.9581 | 0.9788 |
| 0.0543 | 7.7900 | 2968 | 0.9485 | 0.0892 | 0.9485 | 0.9739 |
| 0.0543 | 7.7953 | 2970 | 0.9503 | 0.0892 | 0.9503 | 0.9748 |
| 0.0543 | 7.8005 | 2972 | 0.9582 | 0.1081 | 0.9582 | 0.9789 |
| 0.0543 | 7.8058 | 2974 | 0.9622 | 0.0892 | 0.9622 | 0.9809 |
| 0.0543 | 7.8110 | 2976 | 0.9661 | 0.0892 | 0.9661 | 0.9829 |
| 0.0543 | 7.8163 | 2978 | 0.9745 | 0.0892 | 0.9745 | 0.9872 |
| 0.0543 | 7.8215 | 2980 | 0.9849 | 0.0892 | 0.9849 | 0.9924 |
| 0.0543 | 7.8268 | 2982 | 0.9925 | 0.0892 | 0.9925 | 0.9962 |
| 0.0543 | 7.8320 | 2984 | 1.0065 | -0.0784 | 1.0065 | 1.0032 |
| 0.0543 | 7.8373 | 2986 | 1.0075 | -0.0784 | 1.0075 | 1.0038 |
| 0.0543 | 7.8425 | 2988 | 0.9909 | 0.0833 | 0.9909 | 0.9954 |
| 0.0543 | 7.8478 | 2990 | 0.9749 | 0.0892 | 0.9749 | 0.9874 |
| 0.0543 | 7.8530 | 2992 | 0.9661 | 0.0892 | 0.9661 | 0.9829 |
| 0.0543 | 7.8583 | 2994 | 0.9620 | 0.0892 | 0.9620 | 0.9808 |
| 0.0543 | 7.8635 | 2996 | 0.9593 | 0.0892 | 0.9593 | 0.9795 |
| 0.0543 | 7.8688 | 2998 | 0.9636 | 0.0892 | 0.9636 | 0.9816 |
| 0.045 | 7.8740 | 3000 | 0.9774 | 0.0892 | 0.9774 | 0.9886 |
| 0.045 | 7.8793 | 3002 | 0.9949 | -0.0784 | 0.9949 | 0.9974 |
| 0.045 | 7.8845 | 3004 | 0.9976 | -0.0784 | 0.9976 | 0.9988 |
| 0.045 | 7.8898 | 3006 | 0.9845 | 0.0892 | 0.9845 | 0.9922 |
| 0.045 | 7.8950 | 3008 | 0.9701 | 0.0892 | 0.9701 | 0.9850 |
| 0.045 | 7.9003 | 3010 | 0.9571 | 0.0892 | 0.9571 | 0.9783 |
| 0.045 | 7.9055 | 3012 | 0.9471 | 0.0892 | 0.9471 | 0.9732 |
| 0.045 | 7.9108 | 3014 | 0.9457 | 0.0892 | 0.9457 | 0.9725 |
| 0.045 | 7.9160 | 3016 | 0.9545 | 0.0833 | 0.9545 | 0.9770 |
| 0.045 | 7.9213 | 3018 | 0.9674 | -0.0784 | 0.9674 | 0.9835 |
| 0.045 | 7.9265 | 3020 | 0.9742 | -0.0784 | 0.9742 | 0.9870 |
| 0.045 | 7.9318 | 3022 | 0.9707 | 0.0833 | 0.9707 | 0.9852 |
| 0.045 | 7.9370 | 3024 | 0.9561 | 0.0833 | 0.9561 | 0.9778 |
| 0.045 | 7.9423 | 3026 | 0.9518 | 0.0892 | 0.9518 | 0.9756 |
| 0.045 | 7.9475 | 3028 | 0.9535 | 0.0892 | 0.9535 | 0.9765 |
| 0.045 | 7.9528 | 3030 | 0.9642 | 0.0892 | 0.9642 | 0.9819 |
| 0.045 | 7.9580 | 3032 | 0.9737 | 0.0892 | 0.9737 | 0.9868 |
| 0.045 | 7.9633 | 3034 | 0.9768 | 0.0892 | 0.9768 | 0.9883 |
| 0.045 | 7.9685 | 3036 | 0.9822 | 0.0892 | 0.9822 | 0.9911 |
| 0.045 | 7.9738 | 3038 | 0.9797 | 0.0892 | 0.9797 | 0.9898 |
| 0.045 | 7.9790 | 3040 | 0.9748 | 0.0892 | 0.9748 | 0.9873 |
| 0.045 | 7.9843 | 3042 | 0.9657 | 0.0892 | 0.9657 | 0.9827 |
| 0.045 | 7.9895 | 3044 | 0.9657 | 0.0892 | 0.9657 | 0.9827 |
| 0.045 | 7.9948 | 3046 | 0.9652 | 0.0892 | 0.9652 | 0.9825 |
| 0.045 | 8.0 | 3048 | 0.9722 | 0.0833 | 0.9722 | 0.9860 |
| 0.045 | 8.0052 | 3050 | 0.9777 | 0.0833 | 0.9777 | 0.9888 |
| 0.045 | 8.0105 | 3052 | 0.9805 | -0.0784 | 0.9805 | 0.9902 |
| 0.045 | 8.0157 | 3054 | 0.9660 | 0.0892 | 0.9660 | 0.9829 |
| 0.045 | 8.0210 | 3056 | 0.9491 | 0.0892 | 0.9491 | 0.9742 |
| 0.045 | 8.0262 | 3058 | 0.9384 | 0.1081 | 0.9384 | 0.9687 |
| 0.045 | 8.0315 | 3060 | 0.9351 | 0.1081 | 0.9351 | 0.9670 |
| 0.045 | 8.0367 | 3062 | 0.9394 | 0.0892 | 0.9394 | 0.9692 |
| 0.045 | 8.0420 | 3064 | 0.9402 | 0.0892 | 0.9402 | 0.9697 |
| 0.045 | 8.0472 | 3066 | 0.9370 | 0.0892 | 0.9370 | 0.9680 |
| 0.045 | 8.0525 | 3068 | 0.9419 | 0.0892 | 0.9419 | 0.9705 |
| 0.045 | 8.0577 | 3070 | 0.9519 | 0.0833 | 0.9519 | 0.9757 |
| 0.045 | 8.0630 | 3072 | 0.9566 | 0.0833 | 0.9566 | 0.9781 |
| 0.045 | 8.0682 | 3074 | 0.9495 | 0.0892 | 0.9495 | 0.9744 |
| 0.045 | 8.0735 | 3076 | 0.9462 | 0.0892 | 0.9462 | 0.9727 |
| 0.045 | 8.0787 | 3078 | 0.9495 | 0.0892 | 0.9495 | 0.9744 |
| 0.045 | 8.0840 | 3080 | 0.9596 | 0.0892 | 0.9596 | 0.9796 |
| 0.045 | 8.0892 | 3082 | 0.9694 | 0.0892 | 0.9694 | 0.9846 |
| 0.045 | 8.0945 | 3084 | 0.9744 | 0.0892 | 0.9744 | 0.9871 |
| 0.045 | 8.0997 | 3086 | 0.9646 | 0.0892 | 0.9646 | 0.9821 |
| 0.045 | 8.1050 | 3088 | 0.9514 | 0.0892 | 0.9514 | 0.9754 |
| 0.045 | 8.1102 | 3090 | 0.9477 | 0.0892 | 0.9477 | 0.9735 |
| 0.045 | 8.1155 | 3092 | 0.9438 | 0.0892 | 0.9438 | 0.9715 |
| 0.045 | 8.1207 | 3094 | 0.9491 | 0.0892 | 0.9491 | 0.9742 |
| 0.045 | 8.1260 | 3096 | 0.9534 | 0.0892 | 0.9534 | 0.9764 |
| 0.045 | 8.1312 | 3098 | 0.9551 | 0.0892 | 0.9551 | 0.9773 |
| 0.045 | 8.1365 | 3100 | 0.9595 | 0.0892 | 0.9595 | 0.9796 |
| 0.045 | 8.1417 | 3102 | 0.9608 | 0.0892 | 0.9608 | 0.9802 |
| 0.045 | 8.1470 | 3104 | 0.9690 | 0.0892 | 0.9690 | 0.9844 |
| 0.045 | 8.1522 | 3106 | 0.9720 | 0.0892 | 0.9720 | 0.9859 |
| 0.045 | 8.1575 | 3108 | 0.9691 | 0.0892 | 0.9691 | 0.9844 |
| 0.045 | 8.1627 | 3110 | 0.9723 | 0.0892 | 0.9723 | 0.9860 |
| 0.045 | 8.1680 | 3112 | 0.9736 | 0.0892 | 0.9736 | 0.9867 |
| 0.045 | 8.1732 | 3114 | 0.9712 | 0.0892 | 0.9712 | 0.9855 |
| 0.045 | 8.1785 | 3116 | 0.9630 | 0.0892 | 0.9630 | 0.9813 |
| 0.045 | 8.1837 | 3118 | 0.9582 | 0.0892 | 0.9582 | 0.9789 |
| 0.045 | 8.1890 | 3120 | 0.9461 | 0.0892 | 0.9461 | 0.9727 |
| 0.045 | 8.1942 | 3122 | 0.9416 | 0.0892 | 0.9416 | 0.9704 |
| 0.045 | 8.1995 | 3124 | 0.9449 | 0.0833 | 0.9449 | 0.9721 |
| 0.045 | 8.2047 | 3126 | 0.9457 | 0.0833 | 0.9457 | 0.9725 |
| 0.045 | 8.2100 | 3128 | 0.9410 | 0.0833 | 0.9410 | 0.9701 |
| 0.045 | 8.2152 | 3130 | 0.9371 | 0.0892 | 0.9371 | 0.9681 |
| 0.045 | 8.2205 | 3132 | 0.9361 | 0.0892 | 0.9361 | 0.9675 |
| 0.045 | 8.2257 | 3134 | 0.9340 | 0.1081 | 0.9340 | 0.9664 |
| 0.045 | 8.2310 | 3136 | 0.9351 | 0.1081 | 0.9351 | 0.9670 |
| 0.045 | 8.2362 | 3138 | 0.9345 | 0.1295 | 0.9345 | 0.9667 |
| 0.045 | 8.2415 | 3140 | 0.9349 | 0.1081 | 0.9349 | 0.9669 |
| 0.045 | 8.2467 | 3142 | 0.9340 | 0.1081 | 0.9340 | 0.9664 |
| 0.045 | 8.2520 | 3144 | 0.9396 | 0.0892 | 0.9396 | 0.9693 |
| 0.045 | 8.2572 | 3146 | 0.9587 | 0.0833 | 0.9587 | 0.9791 |
| 0.045 | 8.2625 | 3148 | 0.9830 | -0.0784 | 0.9830 | 0.9915 |
| 0.045 | 8.2677 | 3150 | 0.9892 | -0.0784 | 0.9892 | 0.9946 |
| 0.045 | 8.2730 | 3152 | 0.9821 | -0.0784 | 0.9821 | 0.9910 |
| 0.045 | 8.2782 | 3154 | 0.9626 | 0.0833 | 0.9626 | 0.9811 |
| 0.045 | 8.2835 | 3156 | 0.9429 | 0.0833 | 0.9429 | 0.9710 |
| 0.045 | 8.2887 | 3158 | 0.9318 | 0.0892 | 0.9318 | 0.9653 |
| 0.045 | 8.2940 | 3160 | 0.9262 | 0.0892 | 0.9262 | 0.9624 |
| 0.045 | 8.2992 | 3162 | 0.9238 | 0.0892 | 0.9238 | 0.9611 |
| 0.045 | 8.3045 | 3164 | 0.9238 | 0.0892 | 0.9238 | 0.9611 |
| 0.045 | 8.3097 | 3166 | 0.9300 | 0.0892 | 0.9300 | 0.9644 |
| 0.045 | 8.3150 | 3168 | 0.9411 | 0.0833 | 0.9411 | 0.9701 |
| 0.045 | 8.3202 | 3170 | 0.9478 | 0.0833 | 0.9478 | 0.9736 |
| 0.045 | 8.3255 | 3172 | 0.9619 | 0.0833 | 0.9619 | 0.9808 |
| 0.045 | 8.3307 | 3174 | 0.9841 | -0.0784 | 0.9841 | 0.9920 |
| 0.045 | 8.3360 | 3176 | 0.9966 | -0.0784 | 0.9966 | 0.9983 |
| 0.045 | 8.3412 | 3178 | 0.9902 | -0.0784 | 0.9902 | 0.9951 |
| 0.045 | 8.3465 | 3180 | 0.9710 | 0.0833 | 0.9710 | 0.9854 |
| 0.045 | 8.3517 | 3182 | 0.9555 | 0.0833 | 0.9555 | 0.9775 |
| 0.045 | 8.3570 | 3184 | 0.9392 | 0.0833 | 0.9392 | 0.9691 |
| 0.045 | 8.3622 | 3186 | 0.9294 | 0.0892 | 0.9294 | 0.9640 |
| 0.045 | 8.3675 | 3188 | 0.9280 | 0.0892 | 0.9280 | 0.9633 |
| 0.045 | 8.3727 | 3190 | 0.9308 | 0.0892 | 0.9308 | 0.9648 |
| 0.045 | 8.3780 | 3192 | 0.9366 | 0.0833 | 0.9366 | 0.9678 |
| 0.045 | 8.3832 | 3194 | 0.9399 | 0.0892 | 0.9399 | 0.9695 |
| 0.045 | 8.3885 | 3196 | 0.9488 | 0.0892 | 0.9488 | 0.9741 |
| 0.045 | 8.3937 | 3198 | 0.9605 | 0.0833 | 0.9605 | 0.9800 |
| 0.045 | 8.3990 | 3200 | 0.9662 | 0.0833 | 0.9662 | 0.9829 |
| 0.045 | 8.4042 | 3202 | 0.9727 | 0.0833 | 0.9727 | 0.9862 |
| 0.045 | 8.4094 | 3204 | 0.9663 | 0.0833 | 0.9663 | 0.9830 |
| 0.045 | 8.4147 | 3206 | 0.9611 | 0.0892 | 0.9611 | 0.9803 |
| 0.045 | 8.4199 | 3208 | 0.9681 | 0.0892 | 0.9681 | 0.9839 |
| 0.045 | 8.4252 | 3210 | 0.9722 | 0.0892 | 0.9722 | 0.9860 |
| 0.045 | 8.4304 | 3212 | 0.9744 | 0.0892 | 0.9744 | 0.9871 |
| 0.045 | 8.4357 | 3214 | 0.9796 | 0.0892 | 0.9796 | 0.9898 |
| 0.045 | 8.4409 | 3216 | 0.9869 | 0.0892 | 0.9869 | 0.9934 |
| 0.045 | 8.4462 | 3218 | 0.9948 | 0.0892 | 0.9948 | 0.9974 |
| 0.045 | 8.4514 | 3220 | 0.9988 | 0.0892 | 0.9988 | 0.9994 |
| 0.045 | 8.4567 | 3222 | 1.0015 | 0.0892 | 1.0015 | 1.0007 |
| 0.045 | 8.4619 | 3224 | 1.0056 | 0.0892 | 1.0056 | 1.0028 |
| 0.045 | 8.4672 | 3226 | 1.0104 | 0.0892 | 1.0104 | 1.0052 |
| 0.045 | 8.4724 | 3228 | 1.0190 | 0.0892 | 1.0190 | 1.0095 |
| 0.045 | 8.4777 | 3230 | 1.0245 | 0.0892 | 1.0245 | 1.0122 |
| 0.045 | 8.4829 | 3232 | 1.0256 | 0.0892 | 1.0256 | 1.0127 |
| 0.045 | 8.4882 | 3234 | 1.0293 | 0.0892 | 1.0293 | 1.0146 |
| 0.045 | 8.4934 | 3236 | 1.0304 | 0.0892 | 1.0304 | 1.0151 |
| 0.045 | 8.4987 | 3238 | 1.0350 | 0.0892 | 1.0350 | 1.0173 |
| 0.045 | 8.5039 | 3240 | 1.0390 | -0.0784 | 1.0390 | 1.0193 |
| 0.045 | 8.5092 | 3242 | 1.0484 | -0.0784 | 1.0484 | 1.0239 |
| 0.045 | 8.5144 | 3244 | 1.0454 | -0.0784 | 1.0454 | 1.0225 |
| 0.045 | 8.5197 | 3246 | 1.0366 | -0.0784 | 1.0366 | 1.0181 |
| 0.045 | 8.5249 | 3248 | 1.0179 | -0.0784 | 1.0179 | 1.0089 |
| 0.045 | 8.5302 | 3250 | 0.9954 | 0.0833 | 0.9954 | 0.9977 |
| 0.045 | 8.5354 | 3252 | 0.9771 | 0.0833 | 0.9771 | 0.9885 |
| 0.045 | 8.5407 | 3254 | 0.9701 | 0.0892 | 0.9701 | 0.9849 |
| 0.045 | 8.5459 | 3256 | 0.9667 | 0.0892 | 0.9667 | 0.9832 |
| 0.045 | 8.5512 | 3258 | 0.9684 | 0.1538 | 0.9684 | 0.9841 |
| 0.045 | 8.5564 | 3260 | 0.9704 | 0.1538 | 0.9704 | 0.9851 |
| 0.045 | 8.5617 | 3262 | 0.9719 | 0.2878 | 0.9719 | 0.9858 |
| 0.045 | 8.5669 | 3264 | 0.9750 | 0.0892 | 0.9750 | 0.9874 |
| 0.045 | 8.5722 | 3266 | 0.9800 | 0.0892 | 0.9800 | 0.9899 |
| 0.045 | 8.5774 | 3268 | 0.9826 | 0.0892 | 0.9826 | 0.9913 |
| 0.045 | 8.5827 | 3270 | 0.9902 | 0.0892 | 0.9902 | 0.9951 |
| 0.045 | 8.5879 | 3272 | 0.9963 | 0.0892 | 0.9963 | 0.9982 |
| 0.045 | 8.5932 | 3274 | 0.9989 | 0.0892 | 0.9989 | 0.9994 |
| 0.045 | 8.5984 | 3276 | 1.0012 | 0.0892 | 1.0012 | 1.0006 |
| 0.045 | 8.6037 | 3278 | 1.0012 | 0.0892 | 1.0012 | 1.0006 |
| 0.045 | 8.6089 | 3280 | 0.9966 | 0.0892 | 0.9966 | 0.9983 |
| 0.045 | 8.6142 | 3282 | 0.9883 | 0.0892 | 0.9883 | 0.9941 |
| 0.045 | 8.6194 | 3284 | 0.9814 | 0.0892 | 0.9814 | 0.9906 |
| 0.045 | 8.6247 | 3286 | 0.9778 | 0.0892 | 0.9778 | 0.9888 |
| 0.045 | 8.6299 | 3288 | 0.9728 | 0.0892 | 0.9728 | 0.9863 |
| 0.045 | 8.6352 | 3290 | 0.9683 | 0.0833 | 0.9683 | 0.9840 |
| 0.045 | 8.6404 | 3292 | 0.9666 | 0.0833 | 0.9666 | 0.9832 |
| 0.045 | 8.6457 | 3294 | 0.9663 | 0.0833 | 0.9663 | 0.9830 |
| 0.045 | 8.6509 | 3296 | 0.9658 | 0.0833 | 0.9658 | 0.9828 |
| 0.045 | 8.6562 | 3298 | 0.9606 | 0.0833 | 0.9606 | 0.9801 |
| 0.045 | 8.6614 | 3300 | 0.9555 | 0.0833 | 0.9555 | 0.9775 |
| 0.045 | 8.6667 | 3302 | 0.9445 | 0.0833 | 0.9445 | 0.9719 |
| 0.045 | 8.6719 | 3304 | 0.9335 | 0.0833 | 0.9335 | 0.9662 |
| 0.045 | 8.6772 | 3306 | 0.9319 | 0.2878 | 0.9319 | 0.9653 |
| 0.045 | 8.6824 | 3308 | 0.9380 | 0.3231 | 0.9380 | 0.9685 |
| 0.045 | 8.6877 | 3310 | 0.9457 | 0.3231 | 0.9457 | 0.9725 |
| 0.045 | 8.6929 | 3312 | 0.9508 | 0.2878 | 0.9508 | 0.9751 |
| 0.045 | 8.6982 | 3314 | 0.9550 | 0.0892 | 0.9550 | 0.9772 |
| 0.045 | 8.7034 | 3316 | 0.9621 | 0.0892 | 0.9621 | 0.9809 |
| 0.045 | 8.7087 | 3318 | 0.9770 | 0.0833 | 0.9770 | 0.9885 |
| 0.045 | 8.7139 | 3320 | 0.9889 | 0.0833 | 0.9889 | 0.9945 |
| 0.045 | 8.7192 | 3322 | 1.0007 | 0.0833 | 1.0007 | 1.0003 |
| 0.045 | 8.7244 | 3324 | 1.0041 | 0.0833 | 1.0041 | 1.0021 |
| 0.045 | 8.7297 | 3326 | 1.0003 | 0.0833 | 1.0003 | 1.0002 |
| 0.045 | 8.7349 | 3328 | 0.9956 | 0.0892 | 0.9956 | 0.9978 |
| 0.045 | 8.7402 | 3330 | 0.9930 | 0.0892 | 0.9930 | 0.9965 |
| 0.045 | 8.7454 | 3332 | 0.9939 | 0.0892 | 0.9939 | 0.9970 |
| 0.045 | 8.7507 | 3334 | 0.9955 | 0.0892 | 0.9955 | 0.9978 |
| 0.045 | 8.7559 | 3336 | 0.9967 | 0.0892 | 0.9967 | 0.9983 |
| 0.045 | 8.7612 | 3338 | 1.0003 | 0.0892 | 1.0003 | 1.0001 |
| 0.045 | 8.7664 | 3340 | 1.0027 | 0.0833 | 1.0027 | 1.0013 |
| 0.045 | 8.7717 | 3342 | 1.0062 | 0.0833 | 1.0062 | 1.0031 |
| 0.045 | 8.7769 | 3344 | 1.0113 | 0.0833 | 1.0113 | 1.0056 |
| 0.045 | 8.7822 | 3346 | 1.0069 | 0.0833 | 1.0069 | 1.0034 |
| 0.045 | 8.7874 | 3348 | 0.9962 | 0.0833 | 0.9962 | 0.9981 |
| 0.045 | 8.7927 | 3350 | 0.9827 | 0.0833 | 0.9827 | 0.9913 |
| 0.045 | 8.7979 | 3352 | 0.9756 | 0.0892 | 0.9756 | 0.9877 |
| 0.045 | 8.8031 | 3354 | 0.9756 | 0.0892 | 0.9756 | 0.9877 |
| 0.045 | 8.8084 | 3356 | 0.9778 | 0.0892 | 0.9778 | 0.9888 |
| 0.045 | 8.8136 | 3358 | 0.9823 | 0.0892 | 0.9823 | 0.9911 |
| 0.045 | 8.8189 | 3360 | 0.9921 | 0.0892 | 0.9921 | 0.9960 |
| 0.045 | 8.8241 | 3362 | 0.9998 | 0.0892 | 0.9998 | 0.9999 |
| 0.045 | 8.8294 | 3364 | 1.0016 | 0.0892 | 1.0016 | 1.0008 |
| 0.045 | 8.8346 | 3366 | 1.0013 | 0.0892 | 1.0013 | 1.0006 |
| 0.045 | 8.8399 | 3368 | 1.0042 | 0.0892 | 1.0042 | 1.0021 |
| 0.045 | 8.8451 | 3370 | 1.0097 | 0.0892 | 1.0097 | 1.0048 |
| 0.045 | 8.8504 | 3372 | 1.0146 | 0.0892 | 1.0146 | 1.0073 |
| 0.045 | 8.8556 | 3374 | 1.0145 | 0.0833 | 1.0145 | 1.0072 |
| 0.045 | 8.8609 | 3376 | 1.0128 | 0.0833 | 1.0128 | 1.0064 |
| 0.045 | 8.8661 | 3378 | 1.0071 | 0.0833 | 1.0071 | 1.0036 |
| 0.045 | 8.8714 | 3380 | 1.0094 | 0.0833 | 1.0094 | 1.0047 |
| 0.045 | 8.8766 | 3382 | 1.0080 | 0.0833 | 1.0080 | 1.0040 |
| 0.045 | 8.8819 | 3384 | 1.0005 | 0.0833 | 1.0005 | 1.0002 |
| 0.045 | 8.8871 | 3386 | 0.9918 | 0.0833 | 0.9918 | 0.9959 |
| 0.045 | 8.8924 | 3388 | 0.9821 | 0.0833 | 0.9821 | 0.9910 |
| 0.045 | 8.8976 | 3390 | 0.9740 | 0.0833 | 0.9740 | 0.9869 |
| 0.045 | 8.9029 | 3392 | 0.9673 | 0.0833 | 0.9673 | 0.9835 |
| 0.045 | 8.9081 | 3394 | 0.9590 | 0.0892 | 0.9590 | 0.9793 |
| 0.045 | 8.9134 | 3396 | 0.9556 | 0.0892 | 0.9556 | 0.9775 |
| 0.045 | 8.9186 | 3398 | 0.9516 | 0.0833 | 0.9516 | 0.9755 |
| 0.045 | 8.9239 | 3400 | 0.9504 | 0.0833 | 0.9504 | 0.9749 |
| 0.045 | 8.9291 | 3402 | 0.9537 | 0.0833 | 0.9537 | 0.9766 |
| 0.045 | 8.9344 | 3404 | 0.9548 | 0.0833 | 0.9548 | 0.9772 |
| 0.045 | 8.9396 | 3406 | 0.9537 | 0.0833 | 0.9537 | 0.9766 |
| 0.045 | 8.9449 | 3408 | 0.9508 | 0.0833 | 0.9508 | 0.9751 |
| 0.045 | 8.9501 | 3410 | 0.9512 | 0.0833 | 0.9512 | 0.9753 |
| 0.045 | 8.9554 | 3412 | 0.9519 | 0.0833 | 0.9519 | 0.9756 |
| 0.045 | 8.9606 | 3414 | 0.9503 | 0.0833 | 0.9503 | 0.9748 |
| 0.045 | 8.9659 | 3416 | 0.9510 | 0.0833 | 0.9510 | 0.9752 |
| 0.045 | 8.9711 | 3418 | 0.9548 | 0.0833 | 0.9548 | 0.9771 |
| 0.045 | 8.9764 | 3420 | 0.9554 | 0.0833 | 0.9554 | 0.9774 |
| 0.045 | 8.9816 | 3422 | 0.9570 | 0.0833 | 0.9570 | 0.9783 |
| 0.045 | 8.9869 | 3424 | 0.9587 | 0.0833 | 0.9587 | 0.9791 |
| 0.045 | 8.9921 | 3426 | 0.9612 | 0.0833 | 0.9612 | 0.9804 |
| 0.045 | 8.9974 | 3428 | 0.9624 | 0.0833 | 0.9624 | 0.9810 |
| 0.045 | 9.0026 | 3430 | 0.9640 | 0.0833 | 0.9640 | 0.9819 |
| 0.045 | 9.0079 | 3432 | 0.9632 | 0.0833 | 0.9632 | 0.9814 |
| 0.045 | 9.0131 | 3434 | 0.9659 | 0.0892 | 0.9659 | 0.9828 |
| 0.045 | 9.0184 | 3436 | 0.9723 | 0.0892 | 0.9723 | 0.9861 |
| 0.045 | 9.0236 | 3438 | 0.9779 | 0.0892 | 0.9779 | 0.9889 |
| 0.045 | 9.0289 | 3440 | 0.9851 | 0.0833 | 0.9851 | 0.9925 |
| 0.045 | 9.0341 | 3442 | 0.9930 | 0.0833 | 0.9930 | 0.9965 |
| 0.045 | 9.0394 | 3444 | 0.9956 | 0.0833 | 0.9956 | 0.9978 |
| 0.045 | 9.0446 | 3446 | 0.9959 | 0.0833 | 0.9959 | 0.9979 |
| 0.045 | 9.0499 | 3448 | 0.9931 | 0.0833 | 0.9931 | 0.9966 |
| 0.045 | 9.0551 | 3450 | 0.9908 | 0.0833 | 0.9908 | 0.9954 |
| 0.045 | 9.0604 | 3452 | 0.9843 | 0.0833 | 0.9843 | 0.9921 |
| 0.045 | 9.0656 | 3454 | 0.9790 | 0.0833 | 0.9790 | 0.9894 |
| 0.045 | 9.0709 | 3456 | 0.9737 | 0.0892 | 0.9737 | 0.9868 |
| 0.045 | 9.0761 | 3458 | 0.9727 | 0.0892 | 0.9727 | 0.9863 |
| 0.045 | 9.0814 | 3460 | 0.9751 | 0.0892 | 0.9751 | 0.9875 |
| 0.045 | 9.0866 | 3462 | 0.9785 | 0.0892 | 0.9785 | 0.9892 |
| 0.045 | 9.0919 | 3464 | 0.9815 | 0.0892 | 0.9815 | 0.9907 |
| 0.045 | 9.0971 | 3466 | 0.9868 | 0.0833 | 0.9868 | 0.9934 |
| 0.045 | 9.1024 | 3468 | 0.9889 | 0.0833 | 0.9889 | 0.9944 |
| 0.045 | 9.1076 | 3470 | 0.9865 | 0.0833 | 0.9865 | 0.9932 |
| 0.045 | 9.1129 | 3472 | 0.9806 | 0.0892 | 0.9806 | 0.9902 |
| 0.045 | 9.1181 | 3474 | 0.9739 | 0.0892 | 0.9739 | 0.9869 |
| 0.045 | 9.1234 | 3476 | 0.9714 | 0.0892 | 0.9714 | 0.9856 |
| 0.045 | 9.1286 | 3478 | 0.9720 | 0.0892 | 0.9720 | 0.9859 |
| 0.045 | 9.1339 | 3480 | 0.9730 | 0.0892 | 0.9730 | 0.9864 |
| 0.045 | 9.1391 | 3482 | 0.9752 | 0.0892 | 0.9752 | 0.9875 |
| 0.045 | 9.1444 | 3484 | 0.9769 | 0.0892 | 0.9769 | 0.9884 |
| 0.045 | 9.1496 | 3486 | 0.9785 | 0.0892 | 0.9785 | 0.9892 |
| 0.045 | 9.1549 | 3488 | 0.9803 | 0.0892 | 0.9803 | 0.9901 |
| 0.045 | 9.1601 | 3490 | 0.9793 | 0.0892 | 0.9793 | 0.9896 |
| 0.045 | 9.1654 | 3492 | 0.9765 | 0.0892 | 0.9765 | 0.9882 |
| 0.045 | 9.1706 | 3494 | 0.9742 | 0.0892 | 0.9742 | 0.9870 |
| 0.045 | 9.1759 | 3496 | 0.9724 | 0.0892 | 0.9724 | 0.9861 |
| 0.045 | 9.1811 | 3498 | 0.9736 | 0.0892 | 0.9736 | 0.9867 |
| 0.0402 | 9.1864 | 3500 | 0.9737 | 0.0892 | 0.9737 | 0.9868 |
| 0.0402 | 9.1916 | 3502 | 0.9746 | 0.0892 | 0.9746 | 0.9872 |
| 0.0402 | 9.1969 | 3504 | 0.9777 | 0.0833 | 0.9777 | 0.9888 |
| 0.0402 | 9.2021 | 3506 | 0.9839 | 0.0833 | 0.9839 | 0.9919 |
| 0.0402 | 9.2073 | 3508 | 0.9886 | 0.0833 | 0.9886 | 0.9943 |
| 0.0402 | 9.2126 | 3510 | 0.9872 | 0.0833 | 0.9872 | 0.9936 |
| 0.0402 | 9.2178 | 3512 | 0.9872 | 0.0833 | 0.9872 | 0.9936 |
| 0.0402 | 9.2231 | 3514 | 0.9836 | 0.0833 | 0.9836 | 0.9918 |
| 0.0402 | 9.2283 | 3516 | 0.9803 | 0.0833 | 0.9803 | 0.9901 |
| 0.0402 | 9.2336 | 3518 | 0.9799 | 0.0833 | 0.9799 | 0.9899 |
| 0.0402 | 9.2388 | 3520 | 0.9798 | 0.0833 | 0.9798 | 0.9898 |
| 0.0402 | 9.2441 | 3522 | 0.9813 | 0.0833 | 0.9813 | 0.9906 |
| 0.0402 | 9.2493 | 3524 | 0.9862 | 0.0833 | 0.9862 | 0.9931 |
| 0.0402 | 9.2546 | 3526 | 0.9946 | 0.0833 | 0.9946 | 0.9973 |
| 0.0402 | 9.2598 | 3528 | 1.0017 | 0.0833 | 1.0017 | 1.0009 |
| 0.0402 | 9.2651 | 3530 | 1.0062 | 0.0833 | 1.0062 | 1.0031 |
| 0.0402 | 9.2703 | 3532 | 1.0123 | 0.0833 | 1.0123 | 1.0061 |
| 0.0402 | 9.2756 | 3534 | 1.0161 | 0.0833 | 1.0161 | 1.0080 |
| 0.0402 | 9.2808 | 3536 | 1.0199 | 0.0892 | 1.0199 | 1.0099 |
| 0.0402 | 9.2861 | 3538 | 1.0217 | 0.0892 | 1.0217 | 1.0108 |
| 0.0402 | 9.2913 | 3540 | 1.0211 | 0.0892 | 1.0211 | 1.0105 |
| 0.0402 | 9.2966 | 3542 | 1.0222 | -0.0353 | 1.0222 | 1.0110 |
| 0.0402 | 9.3018 | 3544 | 1.0244 | -0.0353 | 1.0244 | 1.0121 |
| 0.0402 | 9.3071 | 3546 | 1.0253 | -0.0353 | 1.0253 | 1.0126 |
| 0.0402 | 9.3123 | 3548 | 1.0264 | -0.0353 | 1.0264 | 1.0131 |
| 0.0402 | 9.3176 | 3550 | 1.0267 | -0.0353 | 1.0267 | 1.0133 |
| 0.0402 | 9.3228 | 3552 | 1.0269 | -0.0353 | 1.0269 | 1.0133 |
| 0.0402 | 9.3281 | 3554 | 1.0260 | -0.0353 | 1.0260 | 1.0129 |
| 0.0402 | 9.3333 | 3556 | 1.0256 | -0.0353 | 1.0256 | 1.0127 |
| 0.0402 | 9.3386 | 3558 | 1.0260 | -0.0353 | 1.0260 | 1.0129 |
| 0.0402 | 9.3438 | 3560 | 1.0272 | -0.0353 | 1.0272 | 1.0135 |
| 0.0402 | 9.3491 | 3562 | 1.0269 | -0.0353 | 1.0269 | 1.0134 |
| 0.0402 | 9.3543 | 3564 | 1.0250 | 0.0892 | 1.0250 | 1.0124 |
| 0.0402 | 9.3596 | 3566 | 1.0218 | 0.0892 | 1.0218 | 1.0109 |
| 0.0402 | 9.3648 | 3568 | 1.0200 | 0.0892 | 1.0200 | 1.0099 |
| 0.0402 | 9.3701 | 3570 | 1.0153 | 0.0892 | 1.0153 | 1.0076 |
| 0.0402 | 9.3753 | 3572 | 1.0100 | 0.0892 | 1.0100 | 1.0050 |
| 0.0402 | 9.3806 | 3574 | 1.0058 | 0.0892 | 1.0058 | 1.0029 |
| 0.0402 | 9.3858 | 3576 | 1.0011 | 0.0892 | 1.0011 | 1.0006 |
| 0.0402 | 9.3911 | 3578 | 0.9983 | 0.0892 | 0.9983 | 0.9991 |
| 0.0402 | 9.3963 | 3580 | 0.9961 | -0.0353 | 0.9961 | 0.9980 |
| 0.0402 | 9.4016 | 3582 | 0.9950 | -0.0353 | 0.9950 | 0.9975 |
| 0.0402 | 9.4068 | 3584 | 0.9962 | 0.0892 | 0.9962 | 0.9981 |
| 0.0402 | 9.4121 | 3586 | 0.9995 | 0.0892 | 0.9995 | 0.9998 |
| 0.0402 | 9.4173 | 3588 | 1.0028 | 0.0892 | 1.0028 | 1.0014 |
| 0.0402 | 9.4226 | 3590 | 1.0054 | 0.0892 | 1.0054 | 1.0027 |
| 0.0402 | 9.4278 | 3592 | 1.0089 | 0.0892 | 1.0089 | 1.0044 |
| 0.0402 | 9.4331 | 3594 | 1.0130 | 0.0892 | 1.0130 | 1.0065 |
| 0.0402 | 9.4383 | 3596 | 1.0167 | 0.0892 | 1.0167 | 1.0083 |
| 0.0402 | 9.4436 | 3598 | 1.0187 | 0.0892 | 1.0187 | 1.0093 |
| 0.0402 | 9.4488 | 3600 | 1.0197 | 0.0892 | 1.0197 | 1.0098 |
| 0.0402 | 9.4541 | 3602 | 1.0199 | 0.0833 | 1.0199 | 1.0099 |
| 0.0402 | 9.4593 | 3604 | 1.0181 | 0.0833 | 1.0181 | 1.0090 |
| 0.0402 | 9.4646 | 3606 | 1.0158 | 0.0833 | 1.0158 | 1.0079 |
| 0.0402 | 9.4698 | 3608 | 1.0116 | 0.0833 | 1.0116 | 1.0058 |
| 0.0402 | 9.4751 | 3610 | 1.0050 | 0.0833 | 1.0050 | 1.0025 |
| 0.0402 | 9.4803 | 3612 | 0.9979 | 0.0833 | 0.9979 | 0.9990 |
| 0.0402 | 9.4856 | 3614 | 0.9931 | 0.0833 | 0.9931 | 0.9966 |
| 0.0402 | 9.4908 | 3616 | 0.9880 | 0.0833 | 0.9880 | 0.9940 |
| 0.0402 | 9.4961 | 3618 | 0.9853 | 0.0833 | 0.9853 | 0.9926 |
| 0.0402 | 9.5013 | 3620 | 0.9837 | 0.0833 | 0.9837 | 0.9918 |
| 0.0402 | 9.5066 | 3622 | 0.9821 | 0.0833 | 0.9821 | 0.9910 |
| 0.0402 | 9.5118 | 3624 | 0.9787 | 0.0833 | 0.9787 | 0.9893 |
| 0.0402 | 9.5171 | 3626 | 0.9755 | 0.0833 | 0.9755 | 0.9877 |
| 0.0402 | 9.5223 | 3628 | 0.9743 | 0.0833 | 0.9743 | 0.9871 |
| 0.0402 | 9.5276 | 3630 | 0.9739 | 0.0833 | 0.9739 | 0.9869 |
| 0.0402 | 9.5328 | 3632 | 0.9737 | 0.0833 | 0.9737 | 0.9868 |
| 0.0402 | 9.5381 | 3634 | 0.9753 | 0.0833 | 0.9753 | 0.9876 |
| 0.0402 | 9.5433 | 3636 | 0.9783 | 0.0833 | 0.9783 | 0.9891 |
| 0.0402 | 9.5486 | 3638 | 0.9803 | 0.0833 | 0.9803 | 0.9901 |
| 0.0402 | 9.5538 | 3640 | 0.9805 | 0.0833 | 0.9805 | 0.9902 |
| 0.0402 | 9.5591 | 3642 | 0.9796 | 0.0833 | 0.9796 | 0.9897 |
| 0.0402 | 9.5643 | 3644 | 0.9802 | 0.0833 | 0.9802 | 0.9900 |
| 0.0402 | 9.5696 | 3646 | 0.9800 | 0.0833 | 0.9800 | 0.9900 |
| 0.0402 | 9.5748 | 3648 | 0.9795 | 0.0833 | 0.9795 | 0.9897 |
| 0.0402 | 9.5801 | 3650 | 0.9798 | 0.0892 | 0.9798 | 0.9898 |
| 0.0402 | 9.5853 | 3652 | 0.9812 | 0.0892 | 0.9812 | 0.9905 |
| 0.0402 | 9.5906 | 3654 | 0.9837 | 0.0892 | 0.9837 | 0.9918 |
| 0.0402 | 9.5958 | 3656 | 0.9865 | 0.0892 | 0.9865 | 0.9932 |
| 0.0402 | 9.6010 | 3658 | 0.9889 | 0.0833 | 0.9889 | 0.9944 |
| 0.0402 | 9.6063 | 3660 | 0.9920 | 0.0833 | 0.9920 | 0.9960 |
| 0.0402 | 9.6115 | 3662 | 0.9951 | 0.0833 | 0.9951 | 0.9975 |
| 0.0402 | 9.6168 | 3664 | 0.9971 | 0.0833 | 0.9971 | 0.9985 |
| 0.0402 | 9.6220 | 3666 | 0.9988 | 0.0833 | 0.9988 | 0.9994 |
| 0.0402 | 9.6273 | 3668 | 1.0015 | 0.0833 | 1.0015 | 1.0008 |
| 0.0402 | 9.6325 | 3670 | 1.0028 | 0.0833 | 1.0028 | 1.0014 |
| 0.0402 | 9.6378 | 3672 | 1.0046 | 0.0833 | 1.0046 | 1.0023 |
| 0.0402 | 9.6430 | 3674 | 1.0074 | 0.0833 | 1.0074 | 1.0037 |
| 0.0402 | 9.6483 | 3676 | 1.0090 | 0.0833 | 1.0090 | 1.0045 |
| 0.0402 | 9.6535 | 3678 | 1.0109 | 0.0833 | 1.0109 | 1.0054 |
| 0.0402 | 9.6588 | 3680 | 1.0129 | 0.0833 | 1.0129 | 1.0064 |
| 0.0402 | 9.6640 | 3682 | 1.0135 | 0.0833 | 1.0135 | 1.0067 |
| 0.0402 | 9.6693 | 3684 | 1.0128 | 0.0833 | 1.0128 | 1.0064 |
| 0.0402 | 9.6745 | 3686 | 1.0123 | 0.0833 | 1.0123 | 1.0061 |
| 0.0402 | 9.6798 | 3688 | 1.0126 | 0.0833 | 1.0126 | 1.0063 |
| 0.0402 | 9.6850 | 3690 | 1.0136 | 0.0833 | 1.0136 | 1.0068 |
| 0.0402 | 9.6903 | 3692 | 1.0142 | 0.0833 | 1.0142 | 1.0071 |
| 0.0402 | 9.6955 | 3694 | 1.0134 | 0.0833 | 1.0134 | 1.0067 |
| 0.0402 | 9.7008 | 3696 | 1.0114 | 0.0833 | 1.0114 | 1.0057 |
| 0.0402 | 9.7060 | 3698 | 1.0091 | 0.0833 | 1.0091 | 1.0045 |
| 0.0402 | 9.7113 | 3700 | 1.0061 | 0.0833 | 1.0061 | 1.0031 |
| 0.0402 | 9.7165 | 3702 | 1.0038 | 0.0892 | 1.0038 | 1.0019 |
| 0.0402 | 9.7218 | 3704 | 1.0016 | 0.0892 | 1.0016 | 1.0008 |
| 0.0402 | 9.7270 | 3706 | 0.9993 | 0.0892 | 0.9993 | 0.9996 |
| 0.0402 | 9.7323 | 3708 | 0.9976 | 0.0892 | 0.9976 | 0.9988 |
| 0.0402 | 9.7375 | 3710 | 0.9960 | 0.0892 | 0.9960 | 0.9980 |
| 0.0402 | 9.7428 | 3712 | 0.9948 | 0.0892 | 0.9948 | 0.9974 |
| 0.0402 | 9.7480 | 3714 | 0.9943 | 0.0892 | 0.9943 | 0.9972 |
| 0.0402 | 9.7533 | 3716 | 0.9935 | 0.0892 | 0.9935 | 0.9968 |
| 0.0402 | 9.7585 | 3718 | 0.9932 | 0.0892 | 0.9932 | 0.9966 |
| 0.0402 | 9.7638 | 3720 | 0.9921 | 0.0892 | 0.9921 | 0.9961 |
| 0.0402 | 9.7690 | 3722 | 0.9914 | 0.0833 | 0.9914 | 0.9957 |
| 0.0402 | 9.7743 | 3724 | 0.9913 | 0.0833 | 0.9913 | 0.9956 |
| 0.0402 | 9.7795 | 3726 | 0.9909 | 0.0833 | 0.9909 | 0.9954 |
| 0.0402 | 9.7848 | 3728 | 0.9912 | 0.0833 | 0.9912 | 0.9956 |
| 0.0402 | 9.7900 | 3730 | 0.9920 | 0.0833 | 0.9920 | 0.9960 |
| 0.0402 | 9.7953 | 3732 | 0.9921 | 0.0833 | 0.9921 | 0.9961 |
| 0.0402 | 9.8005 | 3734 | 0.9926 | 0.0833 | 0.9926 | 0.9963 |
| 0.0402 | 9.8058 | 3736 | 0.9925 | 0.0833 | 0.9925 | 0.9962 |
| 0.0402 | 9.8110 | 3738 | 0.9926 | 0.0833 | 0.9926 | 0.9963 |
| 0.0402 | 9.8163 | 3740 | 0.9929 | 0.0833 | 0.9929 | 0.9965 |
| 0.0402 | 9.8215 | 3742 | 0.9934 | 0.0833 | 0.9934 | 0.9967 |
| 0.0402 | 9.8268 | 3744 | 0.9934 | 0.0833 | 0.9934 | 0.9967 |
| 0.0402 | 9.8320 | 3746 | 0.9931 | 0.0833 | 0.9931 | 0.9966 |
| 0.0402 | 9.8373 | 3748 | 0.9927 | 0.0833 | 0.9927 | 0.9963 |
| 0.0402 | 9.8425 | 3750 | 0.9924 | 0.0833 | 0.9924 | 0.9962 |
| 0.0402 | 9.8478 | 3752 | 0.9925 | 0.0833 | 0.9925 | 0.9962 |
| 0.0402 | 9.8530 | 3754 | 0.9924 | 0.0833 | 0.9924 | 0.9962 |
| 0.0402 | 9.8583 | 3756 | 0.9928 | 0.0833 | 0.9928 | 0.9964 |
| 0.0402 | 9.8635 | 3758 | 0.9932 | 0.0833 | 0.9932 | 0.9966 |
| 0.0402 | 9.8688 | 3760 | 0.9938 | 0.0833 | 0.9938 | 0.9969 |
| 0.0402 | 9.8740 | 3762 | 0.9944 | 0.0833 | 0.9944 | 0.9972 |
| 0.0402 | 9.8793 | 3764 | 0.9948 | 0.0833 | 0.9948 | 0.9974 |
| 0.0402 | 9.8845 | 3766 | 0.9952 | 0.0833 | 0.9952 | 0.9976 |
| 0.0402 | 9.8898 | 3768 | 0.9951 | 0.0833 | 0.9951 | 0.9976 |
| 0.0402 | 9.8950 | 3770 | 0.9951 | 0.0833 | 0.9951 | 0.9976 |
| 0.0402 | 9.9003 | 3772 | 0.9948 | 0.0833 | 0.9948 | 0.9974 |
| 0.0402 | 9.9055 | 3774 | 0.9949 | 0.0833 | 0.9949 | 0.9974 |
| 0.0402 | 9.9108 | 3776 | 0.9950 | 0.0833 | 0.9950 | 0.9975 |
| 0.0402 | 9.9160 | 3778 | 0.9954 | 0.0833 | 0.9954 | 0.9977 |
| 0.0402 | 9.9213 | 3780 | 0.9957 | 0.0833 | 0.9957 | 0.9978 |
| 0.0402 | 9.9265 | 3782 | 0.9957 | 0.0833 | 0.9957 | 0.9978 |
| 0.0402 | 9.9318 | 3784 | 0.9956 | 0.0833 | 0.9956 | 0.9978 |
| 0.0402 | 9.9370 | 3786 | 0.9957 | 0.0892 | 0.9957 | 0.9978 |
| 0.0402 | 9.9423 | 3788 | 0.9959 | 0.0892 | 0.9959 | 0.9979 |
| 0.0402 | 9.9475 | 3790 | 0.9959 | 0.0892 | 0.9959 | 0.9980 |
| 0.0402 | 9.9528 | 3792 | 0.9959 | 0.0892 | 0.9959 | 0.9980 |
| 0.0402 | 9.9580 | 3794 | 0.9958 | 0.0892 | 0.9958 | 0.9979 |
| 0.0402 | 9.9633 | 3796 | 0.9957 | 0.0892 | 0.9957 | 0.9978 |
| 0.0402 | 9.9685 | 3798 | 0.9956 | 0.0892 | 0.9956 | 0.9978 |
| 0.0402 | 9.9738 | 3800 | 0.9955 | 0.0892 | 0.9955 | 0.9978 |
| 0.0402 | 9.9790 | 3802 | 0.9954 | 0.0892 | 0.9954 | 0.9977 |
| 0.0402 | 9.9843 | 3804 | 0.9953 | 0.0892 | 0.9953 | 0.9976 |
| 0.0402 | 9.9895 | 3806 | 0.9952 | 0.0892 | 0.9952 | 0.9976 |
| 0.0402 | 9.9948 | 3808 | 0.9951 | 0.0892 | 0.9951 | 0.9976 |
| 0.0402 | 10.0 | 3810 | 0.9951 | 0.0892 | 0.9951 | 0.9976 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
bu1/IQ_classification_112_new1_-10db
|
bu1
| 2024-11-20T11:01:37Z | 188 | 0 |
transformers
|
[
"transformers",
"safetensors",
"IQsignal_transformer",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-11-20T11:01:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
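The card leaves this section unfilled, so the snippet below is only a plausible loading sketch, not the authors' documented usage: it assumes the standard `transformers` auto classes can load this repository's custom `IQsignal_transformer` code (the repo is tagged `custom_code`, which requires `trust_remote_code=True`). Everything except the model ID is an assumption.

```python
# Hedged sketch only — this card does not document the intended usage.
# Assumption: the repo's custom modeling code is loadable via the standard
# transformers auto classes with trust_remote_code=True.
from transformers import AutoConfig, AutoModel

model_id = "bu1/IQ_classification_112_new1_-10db"

# Downloads and executes the repository's custom IQsignal_transformer code.
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

model.eval()
print(type(model).__name__)  # inspect which custom class was registered
```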
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700); a hedged, code-based alternative is sketched after the list below.
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
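As a code-based alternative to the web calculator, the [codecarbon](https://github.com/mlco2/codecarbon) package from the same MLCO2 project can measure emissions during training. The sketch below is an assumption, not something this card's authors documented; the project name and the `train()` function are illustrative placeholders.

```python
# Hedged sketch — codecarbon is an assumed alternative to the web calculator
# above; this card's authors did not document using it.
from codecarbon import EmissionsTracker

def train():
    ...  # placeholder for the actual training loop (hypothetical)

tracker = EmissionsTracker(project_name="model-training")  # name is illustrative
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # total estimated emissions in kg CO2eq
    print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```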
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|