Dataset schema (per-column type and value range):

| column | dtype | min | max |
|---|---|---|---|
| modelId | string (lengths) | 5 | 139 |
| author | string (lengths) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-06 18:27:02 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (544 classes) | | |
| tags | list (lengths) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-06 18:26:43 |
| card | string (lengths) | 11 | 1.01M |
technocrat3128/mistral7b-hub-push
technocrat3128
2024-02-29T05:34:22Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T05:30:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
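The quick-start section of the card above is empty; as a stopgap, here is a minimal sketch of standard 🤗 transformers text-generation usage for this checkpoint (the repo id is taken from this entry; the prompt and generation settings are illustrative, since the card documents neither):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "technocrat3128/mistral7b-hub-push"  # repo id from this entry
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the card does not document a chat template or intended usage.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```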
PavanDeepak/Topic_Classification
PavanDeepak
2024-02-29T05:29:41Z
111
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T12:51:23Z
---
license: mit
---

## BERT-based Text Classification Model

This model is a fine-tuned version of the bert-base-uncased model, specifically adapted for text classification across a diverse set of categories. The model has been trained on a dataset collected from multiple sources, including the News Category Dataset on Kaggle and various other websites.

The model classifies text into one of the following 12 categories:

* Food
* Videogames & Shows
* Kids and fun
* Homestyle
* Travel
* Health
* Charity
* Electronics & Technology
* Sports
* Cultural & Music
* Education
* Convenience

The model has demonstrated robust performance, with an accuracy of 0.721459, an F1 score of 0.659451, a precision of 0.707620, and a recall of 0.635155.

## Model Architecture

The model leverages the BertForSequenceClassification architecture. It has been fine-tuned on the aforementioned dataset, with the following key configuration parameters:

* Hidden size: 768
* Number of attention heads: 12
* Number of hidden layers: 12
* Max position embeddings: 512
* Type vocab size: 2
* Vocab size: 30522

The model uses the GELU activation function in its hidden layers and applies dropout with a probability of 0.1 to the attention probabilities to prevent overfitting.

## Example

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
from scipy.special import expit

MODEL = "PavanDeepak/Topic_Classification"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label

text = "I love chicken manchuria"
tokens = tokenizer(text, return_tensors="pt")
output = model(**tokens)

# logits has shape [batch, num_labels]; take the per-class scores for the single input
scores = output.logits[0].detach().numpy()
scores = expit(scores)
predictions = (scores >= 0.5) * 1

for i in range(len(predictions)):
    if predictions[i]:
        print(class_mapping[i])
```

## Output:

* Food
* Videogames & Shows
* Homestyle
* Travel
* Health
anhtranhong/fingpt-mt_llama2-7b_lora_with_fiqa-qa-v3
anhtranhong
2024-02-29T05:25:02Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-29T05:24:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lichang-Chen/ODIN_L1_O1
Lichang-Chen
2024-02-29T05:14:39Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "RM", "dataset:OpenAssistant/oasst1", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-05T13:33:36Z
---
license: mit
datasets:
- OpenAssistant/oasst1
tags:
- RM
---

This model was trained with L=1 (length loss weight) and O=1 (orthogonal loss weight).
LarryAIDraw/laufen-10
LarryAIDraw
2024-02-29T05:05:49Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-29T04:58:52Z
--- license: creativeml-openrail-m --- https://civitai.com/models/324680/laufen-sousou-no-frieren
IntervitensInc/intv_gemma_mk1
IntervitensInc
2024-02-29T05:03:55Z
114
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T05:01:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kiriyamaX/twitter-aesthetic-v2-e10
kiriyamaX
2024-02-29T05:00:23Z
0
0
null
[ "region:us" ]
null
2024-02-29T04:59:30Z
Backup of s3://quail-tmp/ft-sac-ava1-compound_score-ep10.pth
Yotto3108/koSoLAR_2way_3000_10epoch
Yotto3108
2024-02-29T04:58:43Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T07:40:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cookinai/titanbagel
cookinai
2024-02-29T04:54:42Z
50
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-23T05:33:27Z
---
license: cc-by-4.0
---

# Experiment

A finetune of Jon Durbin's Bagel DPO 7B on the Hercules 3.0 dataset.
aisuko/ft-facebook-bart-large-xsum-on-samsum
aisuko
2024-02-29T04:54:13Z
108
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "dataset:text-summatization", "base_model:facebook/bart-large-xsum", "base_model:finetune:facebook/bart-large-xsum", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-20T06:01:20Z
--- license: mit base_model: facebook/bart-large-xsum tags: - generated_from_trainer datasets: - text-summatization model-index: - name: ft-facebook-bart-large-xsum-on-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-facebook-bart-large-xsum-on-samsum This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the Samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 0.22 | 100 | 1.5051 | 49.5382 | 24.4845 | 40.7456 | 45.3284 | 26.1722 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.17.0 - Tokenizers 0.15.1
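The card above gives training details but no inference example; a minimal sketch using the summarization pipeline (the dialogue is an illustrative SAMSum-style input, not from the card):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="aisuko/ft-facebook-bart-large-xsum-on-samsum",
)

# Illustrative SAMSum-style dialogue
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```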
CatBarks/t5_esSEC2_4_tokenizer
CatBarks
2024-02-29T04:46:03Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-29T04:46:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CatBarks/t5_esSEC2_4
CatBarks
2024-02-29T04:46:02Z
48
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-29T04:44:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
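Since the quick-start section of this card is also empty, here is a minimal text-classification sketch (the input sentence is illustrative; the label set is undocumented and will be whatever the checkpoint's config defines):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="CatBarks/t5_esSEC2_4")

# Illustrative input; the card does not document the label set.
print(classifier("The company reported strong quarterly earnings."))
```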
LarryAIDraw/yorktownii-lora-v1-32dim-20ep-naivae
LarryAIDraw
2024-02-29T04:45:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-02-26T15:36:08Z
--- license: creativeml-openrail-m --- https://civitai.com/models/18699?modelVersionId=22186
JC9027/finetuning-gpt2-finetunned
JC9027
2024-02-29T04:44:43Z
116
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-26T05:45:10Z
--- license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer model-index: - name: finetuning-gpt2-finetunned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-gpt2-finetunned This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
akari000/roberta-dependency-max-4split
akari000
2024-02-29T04:44:09Z
117
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-31T14:32:49Z
This is the model trained with curriculum learning based on sentence complexity, approximating language acquisition, as described in [this paper](https://aclanthology.org/2023.conll-babylm.25/).

# Citation

```
@inproceedings{oba-etal-2023-babylm,
    title = "{B}aby{LM} Challenge: Curriculum learning based on sentence complexity approximating language acquisition",
    author = "Oba, Miyu and Haga, Akari and Fukatsu, Akiyo and Oseki, Yohei",
    editor = "Warstadt, Alex and Mueller, Aaron and Choshen, Leshem and Wilcox, Ethan and Zhuang, Chengxu and Ciro, Juan and Mosquera, Rafael and Paranjabe, Bhargavi and Williams, Adina and Linzen, Tal and Cotterell, Ryan",
    booktitle = "Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.conll-babylm.25",
    doi = "10.18653/v1/2023.conll-babylm.25",
    pages = "290--297",
}
```
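The card links the paper but shows no usage; a minimal fill-mask sketch (the sentence is illustrative):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="akari000/roberta-dependency-max-4split")

# RoBERTa-style checkpoints use <mask> as the mask token.
for pred in unmasker("The children <mask> in the park."):
    print(pred["token_str"], pred["score"])
```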
jonquimbly/shap-e
jonquimbly
2024-02-29T04:36:13Z
4
1
diffusers
[ "diffusers", "text-to-image", "shap-e", "text-to-3d", "arxiv:2305.02463", "license:mit", "diffusers:ShapEPipeline", "region:us" ]
text-to-3d
2024-02-29T04:36:13Z
---
license: mit
tags:
- text-to-image
- shap-e
- diffusers
pipeline_tag: text-to-3d
---

# Shap-E

Shap-E introduces a diffusion process that can generate a 3D image from a text prompt. It was introduced in [Shap-E: Generating Conditional 3D Implicit Functions](https://arxiv.org/abs/2305.02463) by Heewoo Jun and Alex Nichol from OpenAI.

The original repository of Shap-E can be found here: https://github.com/openai/shap-e.

_The authors of Shap-E didn't author this model card. They provide a separate model card [here](https://github.com/openai/shap-e/blob/main/model-card.md)._

## Introduction

The abstract of the Shap-E paper:

*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. We release model weights, inference code, and samples at [this https URL](https://github.com/openai/shap-e).*

## Released checkpoints

The authors released the following checkpoints:

* [openai/shap-e](https://hf.co/openai/shap-e): produces a 3D image from a text input prompt
* [openai/shap-e-img2img](https://hf.co/openai/shap-e-img2img): samples a 3D image from a synthetic 2D image

## Usage examples in 🧨 diffusers

First make sure you have installed all the dependencies:

```bash
pip install transformers accelerate -q
pip install git+https://github.com/huggingface/diffusers@shap-ee
```

Once the dependencies are installed, use the code below:

```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif

ckpt_id = "openai/shap-e"
pipe = ShapEPipeline.from_pretrained(ckpt_id).to("cuda")

guidance_scale = 15.0
prompt = "a shark"
images = pipe(
    prompt,
    guidance_scale=guidance_scale,
    num_inference_steps=64,
    frame_size=256,
).images

gif_path = export_to_gif(images, "shark_3d.gif")
```

## Results

<table>
<tbody>
<tr>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/bird_3d.gif" alt="a bird">
</td>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/shark_3d.gif" alt="a shark">
</td>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/veg_3d.gif" alt="A bowl of vegetables">
</td>
</tr>
<tr>
<td align="center">A bird</td>
<td align="center">A shark</td>
<td align="center">A bowl of vegetables</td>
</tr>
</tbody>
</table>

## Training details

Refer to the [original paper](https://arxiv.org/abs/2305.02463).

## Known limitations and potential biases

Refer to the [original model card](https://github.com/openai/shap-e/blob/main/model-card.md).
## Citation ```bibtex @misc{jun2023shape, title={Shap-E: Generating Conditional 3D Implicit Functions}, author={Heewoo Jun and Alex Nichol}, year={2023}, eprint={2305.02463}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
suthanhcong/bert-base-japanese-v3-marc_ja
suthanhcong
2024-02-29T04:33:47Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-29T04:33:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
andersonarc/maid-yuzu-v8-alter-GGUF
andersonarc
2024-02-29T04:11:57Z
14
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss", "base_model:merge:NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss", "base_model:cognitivecomputations/dolphin-2.7-mixtral-8x7b", "base_model:merge:cognitivecomputations/dolphin-2.7-mixtral-8x7b", "base_model:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:merge:mistralai/Mixtral-8x7B-v0.1", "base_model:smelborp/MixtralOrochi8x7B", "base_model:merge:smelborp/MixtralOrochi8x7B", "base_model:ycros/BagelMIsteryTour-v2-8x7B", "base_model:merge:ycros/BagelMIsteryTour-v2-8x7B", "endpoints_compatible", "region:us" ]
null
2024-02-29T03:01:38Z
---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- mistralai/Mixtral-8x7B-Instruct-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
- NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
library_name: transformers
tags:
- mergekit
- merge
---

# maid-yuzu-v8-alter-GGUF

Quantized from https://huggingface.co/rhplus0831/maid-yuzu-v8-alter.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

v7's approach worked better than I thought, so I tried something even weirder as a test. I don't think a proper model will come out, but I'm curious about the results.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method. The models were merged with SLERP in the following order:

* maid-yuzu-v8-base: mistralai/Mixtral-8x7B-v0.1 + mistralai/Mixtral-8x7B-Instruct-v0.1 = 0.5
* maid-yuzu-v8-step1: above + jondurbin/bagel-dpo-8x7b-v0.2 = 0.25
* maid-yuzu-v8-step2: above + cognitivecomputations/dolphin-2.7-mixtral-8x7b = 0.25
* maid-yuzu-v8-step3: above + NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss = 0.25
* maid-yuzu-v8-step4-alter: above + ycros/BagelMIsteryTour-v2-8x7B = 0.5
* maid-yuzu-v8-alter: above + smelborp/MixtralOrochi8x7B = 0.5

### Models Merged

The following models were included in the merge:

* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* ../maid-yuzu-v8-step4-alter

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: ../maid-yuzu-v8-step4-alter
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: ../maid-yuzu-v8-step4-alter
  - layer_range: [0, 32]
    model:
      model:
        path: smelborp/MixtralOrochi8x7B
```
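The card does not show how to run the GGUF files; a minimal sketch with llama-cpp-python (the filename is hypothetical — substitute one of the quantizations actually present in the repo):

```python
from llama_cpp import Llama

# Hypothetical filename; pick an actual GGUF file from the repository.
llm = Llama(model_path="maid-yuzu-v8-alter.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a haiku about yuzu.", max_tokens=64)
print(out["choices"][0]["text"])
```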
KVNAditya/drl__u6__prd
KVNAditya
2024-02-29T03:56:46Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-29T03:51:38Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.24 +/- 0.09
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
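The usage section above is still a TODO; under the standard huggingface_sb3 workflow, loading would look roughly like this (the checkpoint filename is hypothetical — check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename; use the actual .zip in this repository.
checkpoint = load_from_hub(
    repo_id="KVNAditya/drl__u6__prd",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```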
AIMindaeng/hansoldeco-KGOAT
AIMindaeng
2024-02-29T03:52:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-29T03:52:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nagyadam0616/mistral-x-twitter-290
nagyadam0616
2024-02-29T03:50:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-29T03:50:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ArthurZ/mamba-790m
ArthurZ
2024-02-29T03:47:39Z
373
0
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-19T12:21:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ArthurZ/mamba-1.4b
ArthurZ
2024-02-29T03:45:26Z
348
0
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-19T12:23:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
flexthink/ttseval-wavlm-transformer
flexthink
2024-02-29T03:38:39Z
4
0
null
[ "arxiv:2106.04624", "license:apache-2.0", "region:us" ]
null
2024-02-28T19:32:52Z
--- license: apache-2.0 --- --- language: "en" thumbnail: tags: - speechbrain - tts - mos license: "apache-2.0" datasets: - somos metrics: - Pearson R inference: false --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # TTS MOS estimation with WavLM for LJSpeech This repository provides all the necessary tools to perform TTS quality evaluation with a fine-tuned WavLM model. The model attempts to predict the Mean Opinion Score (MOS), i.e. averaged human ratings on a scale from 1 to 5. The model was trained using the SOMOS dataset from Samsung: https://paperswithcode.com/dataset/somos For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model's performance on the SOMOS test set is: | Release | Pearson R | |:----------:|:--------------:| | 2024-02-29 | 0.904 | ## Pipeline description This system is composed of a WavLM model and a simple transformer followed by statistical pooling. The model was trained by first pre-conditioning it as a simple classifier that attempts to determine whether a rating is above a certain threshold, and then fine-tuning it on the regression task. ## Install SpeechBrain First of all, please install the **development** version of SpeechBrain with the following command: ``` pip install git+https://github.com/speechbrain/speechbrain.git@develop ``` Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform MOS estimation The `RegressionModelSpeechEvaluator` interface is used as a high-level wrapper for the MOS estimation task: ```python from speechbrain.inference.eval import RegressionModelSpeechEvaluator source = "flexthink/ttseval-wavlm-transformer" evaluator = RegressionModelSpeechEvaluator.from_hparams(source) file_names = [ "LJ002-0181_110.wav", "booksent_2012_0005_001.wav", ] prediction = evaluator.evaluate_files(file_names) ``` The prediction is a `SpeechEvaluationResult` named tuple instance where `prediction.score` and `prediction.details["score"]` both indicate the predicted Mean Opinion Score. ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method (a minimal sketch follows at the end of this card). ### Training The model was trained with SpeechBrain. To train it from scratch, follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run training: ``` cd recipes/SOMOS/ttseval python train.py hparams/train.yaml --data_folder=your_data_folder ``` ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
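A minimal sketch of the GPU tip above, reusing the evaluator interface from this card; `run_opts` is SpeechBrain's standard mechanism for selecting the inference device, and the file name is the illustrative one from the earlier snippet:

```python
from speechbrain.inference.eval import RegressionModelSpeechEvaluator

# Load the evaluator directly onto the GPU via run_opts.
evaluator = RegressionModelSpeechEvaluator.from_hparams(
    "flexthink/ttseval-wavlm-transformer",
    run_opts={"device": "cuda"},
)
prediction = evaluator.evaluate_files(["LJ002-0181_110.wav"])
print(prediction.details["score"])  # predicted Mean Opinion Score
```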
nagyadam0616/mistral-x-twitter-v2
nagyadam0616
2024-02-29T03:38:33Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-29T03:38:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Coletomyo/whisper-Mongolian
Coletomyo
2024-02-29T03:31:02Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-29T02:07:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lunarsylph/gemmacell_v6
lunarsylph
2024-02-29T03:20:30Z
114
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T03:08:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
danwils/Alasv2-Seallm-7B-v2
danwils
2024-02-29T03:17:10Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T03:08:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
athan37/sd-class-butterflies-32
athan37
2024-02-29T03:17:03Z
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-02-29T03:16:51Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('athan37/sd-class-butterflies-32') image = pipeline().images[0] image ```
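A short, optional extension of the snippet above — a sketch assuming the standard `diffusers` `DDPMPipeline` call signature — showing seeded, batched generation; the `butterfly_{i}.png` filenames are illustrative:

```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('athan37/sd-class-butterflies-32')
pipeline.to('cuda')  # optional: remove to run on CPU

# A fixed seed makes the generated batch reproducible.
generator = torch.Generator(device='cuda').manual_seed(42)
images = pipeline(batch_size=4, generator=generator).images
for i, image in enumerate(images):
    image.save(f'butterfly_{i}.png')
```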
Jayamurugan/SahaAI
Jayamurugan
2024-02-29T03:13:31Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T03:06:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5
cogbuji
2024-02-29T02:56:46Z
16
5
mlx
[ "mlx", "safetensors", "mistral", "medical", "health", "instruct", "finetune", "chatml", "text-generation", "conversational", "en", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B", "license:mit", "region:us" ]
text-generation
2024-02-20T01:35:24Z
--- base_model: teknium/OpenHermes-2.5-Mistral-7B license: mit language: - en model_creator: Chime Ogbuji library_name: mlx model_name: Mr-Grammatology-clinical-problems-Mistral-7B-0.5 pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' tags: - mlx - medical - health - mistral - instruct - finetune - chatml --- # Mr-Grammatology-clinical-problems-Mistral-7B-0.5 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/651d96a3e8c4c2ebaafc1e7d/uyiryuBhU4y62f4CRxabO.png) The name of the model is a homage to Fela Kuti's song __Mr Grammarticalogy-Lisationalsim Is The Boss__, released on the B-side of his 1976 LP [Excuse O](https://www.discogs.com/release/3149841-Fela-And-The-Africa-70-Excuse-O). It is a 16/32 QLoRA fine-tune of all linear layers of [teknium/OpenHermes-2.5-Mistral-7B](/teknium/OpenHermes-2.5-Mistral-7B), using [controlled natural language (CNL) phrases](https://github.com/chimezie/django-snomed-ct#controlled-natural-language) generated from the September 2023 release of the [SNOMED CT United States Edition](https://www.snomed.org/snomed-ct/Use-SNOMED-CT). The general idea is described in [Domain-Specific Biomedical Ontologies, RALM, and Generative Medical Expert Systems](https://chimezie.medium.com/biomedical-ontology-retrieval-augmented-language-models-using-django-snomed-ct-and-ogbujipt-dfa0d0b150d8). It is an experimental model for non-production environments, intended to test how generative AI systems can be trained for use in various medical informatics scenarios. The original model was converted to MLX format, quantized, and then subjected to continued pretraining on all the active domain-expert text definitions available in SNOMED-CT, at a constant learning rate of 1e-5, using [mlx_lm's LoRA finetuning functionality](https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/LORA.md) with 16 LoRA layers. It was then trained on a dataset of 336,762 records of medical terminology **definition instructions** generated from SNOMED-CT using a fork of [django-snomed-ct](https://github.com/chimezie/django-snomed-ct). These definition instructions were generated from the **disorder**, **finding**, **morphological abnormality**, and **situation** hierarchies in SNOMED-CT. This training step was done using [mlx-tuning-fork](https://github.com/chimezie/mlx-tuning-fork) over 42,096 training iterations with a batch size of 8, using LoRA on all linear layers. A further 51,082 records of more granular definition instructions were generated, part of which were incorporated into the training dataset, with 40% held out for validation. ## Use with mlx ```bash pip install mlx-lm ``` Download and convert. ```bash $ python -m mlx_lm.convert --hf-path cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5 \ --mlx-path /path/to/mlx/model ``` Generate from prompts on the command line (see [Generate Text with LLMs and MLX](https://github.com/ml-explore/mlx-examples/tree/main/llms) for more options): ```bash $ python -m mlx_lm.generate --prompt "How is Cardiomyopathy characterized in form?" \
--temp .4 -m 300 --model /path/to/mlx/model --seed 4 ``` ``` Prompt: <|im_start|>user How is Cardiomyopathy characterized in form?<|im_end|> <|im_start|>assistant Cardiomyopathy is characterized in form by a morphologically abnormal structure located in a myocardium structure ``` ## Example of 1-shot description prompting This uses mlx-tuning-fork with OgbujiPt's word looms to construct a 1-shot example of description prompting. ```bash $ python -m mlx_tuning_fork.training -nt 1200 -t .4 --loom-file=sct_prompt.toml -f chatml config.yaml ``` ``` Prompt: <|im_start|>system Give detailed responses. Use critical thinking and think step-by-step. Don't make up answers if you don't know. Use SNOMED-CT definitions as much as possible, describing characterizations in form of morphological abnormalities involved in diagnoses (their signs), their etiology (causes/basis), their related findings. ## Question ## What are the signs and etiology of Skin ulcer due to diabetes mellitus? ## Answer ## SNOMED-CT defines Skin ulcer due to diabetes mellitus as characterized in form by an ulcer located in a skin structure. Ulcer is a morphologic abnormality. The cause or origin of a disorder is part of its etiology. Skin ulcer due to diabetes mellitus is caused by diabetes mellitus. So, the etiology of Skin ulcer due to diabetes mellitus is diabetes mellitus. Infected ulcer of skin is an infectious process characterized in form by an ulcer located in a skin structure. Since the sign of Skin ulcer due to diabetes mellitus is also an ulcer, Infected ulcer of skin is an example of a disorder with the same signs as Skin ulcer due to diabetes mellitus. <|im_end|> <|im_start|>user Take a deep breath and work on this problem step by step. ## Question ## The patient has Irritable bowel syndrome. Summarize the biological processes and signs (characterizations in form) involved in the patient's diagnoses. Give examples of other disorders with the same signs and etiology. <|im_end|> <|im_start|>assistant Irritable bowel syndrome (SNOMED-CT definition: Irritable bowel syndrome) is a functional disorder of the gastrointestinal tract, characterized in form by abnormal bowel movements. It is a clinical diagnosis, and its signs and symptoms include abdominal pain, bloating, and altered bowel movements (diarrhea, constipation, or both). Etiology: Irritable bowel syndrome is believed to be caused by factors including abnormal gut motility, gastrointestinal dysrhythmias, and psychological factors. Signs: Examples of signs of the disorder include changes in stool consistency and abdominal pain. Irritable bowel syndrome is a functional bowel disorder, which is a category of gastrointestinal disorders with similar signs and symptoms. Examples of disorders with the same signs and etiology include: 1. Functional dyspepsia: It is characterized by an upper abdominal pain or discomfort and has a similar etiology as irritable bowel syndrome. It is a functional disorder of the stomach and small intestine, and its signs include epigastric pain and discomfort. 2. Chronic idiopathic constipation: It is characterized by chronic constipation and has a similar etiology as irritable bowel syndrome. It is a functional disorder of the colon ========== Prompt: 447.658 tokens-per-sec ```
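For programmatic use, here is a minimal sketch with the `mlx_lm` Python API rather than the CLI shown above; the model path is assumed to be the output of the `mlx_lm.convert` step, and the ChatML wrapping mirrors the prompt template in this card:

```python
from mlx_lm import load, generate

# Path produced by the mlx_lm.convert step shown above (assumed).
model, tokenizer = load("/path/to/mlx/model")

# Wrap the question in the ChatML template the model was trained with.
prompt = (
    "<|im_start|>user\n"
    "How is Cardiomyopathy characterized in form?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
response = generate(model, tokenizer, prompt=prompt, max_tokens=300, verbose=True)
```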
fzzhang/mistral_gsm8k_s_prod_fullS_unquantized_merged
fzzhang
2024-02-29T02:52:05Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T02:45:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
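The "How to Get Started with the Model" section of the card above is left as [More Information Needed]. As a hedged sketch only (this record's repository id is not shown here, so `<repo-id>` below is a hypothetical placeholder), a Mistral-architecture text-generation checkpoint like this one can usually be loaded with `transformers`:

```python
# Hedged sketch, not taken from the card itself. "<repo-id>" is a
# hypothetical placeholder for this record's repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<repo-id>")
model = AutoModelForCausalLM.from_pretrained("<repo-id>", device_map="auto")

# Generate a short continuation from a simple prompt.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```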
CatBarks/t5_esSEC2_2
CatBarks
2024-02-29T02:52:02Z
48
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-02-29T02:49:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
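This card's "How to Get Started" section is likewise empty. Given the record's tags (`t5`, `text-classification`), a hedged loading sketch, assuming the checkpoint ships a sequence-classification head as the pipeline tag suggests, is:

```python
# Hedged sketch, not from the card: the tags suggest a T5 checkpoint with a
# classification head, which the text-classification pipeline can load.
from transformers import pipeline

classifier = pipeline("text-classification", model="CatBarks/t5_esSEC2_2")
print(classifier("Example sentence to classify."))
```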
fzzhang/mistral_gsm8k_s_prod_fullS
fzzhang
2024-02-29T02:40:11Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-02-28T19:11:32Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral_gsm8k_s_prod_fullS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_gsm8k_s_prod_fullS This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.0
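The card lists training hyperparameters but no usage snippet. Since this is a PEFT adapter trained on top of mistralai/Mistral-7B-v0.1, a minimal hedged loading sketch (assuming the PEFT and Transformers versions listed above) looks like:

```python
# Hedged sketch, not from the card: attach the published adapter to its
# stated base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "fzzhang/mistral_gsm8k_s_prod_fullS")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```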
Kquant03/Nanashi-2x7B-bf16
Kquant03
2024-02-29T02:37:24Z
49
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "merge", "en", "arxiv:2101.03961", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T05:40:39Z
--- license: apache-2.0 language: - en tags: - moe - merge --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/3QnSTfLM9b0CgOIIMMv-t.png) # Nameless. [Join our Discord!](https://discord.gg/ZgU79QDnE2) A frankenMoE using zero prompting at all. The basic config looks like this...(detailed version is in the files and versions): - [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) - base - [alnrg2arg/blockchainlabs_test3_seminar](https://huggingface.co/alnrg2arg/blockchainlabs_test3_seminar) - expert #1 - [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) - expert #2 # I'll upload it to evals if Azathoth-16x7B does good, since this has a similar idea behind it. # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)" ### (from the MistralAI papers...click the quoted question above to navigate to it directly.) The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps. Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining. So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs! A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png) Switch Layer MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961) So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts. Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges: Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting. Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. 
For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon). If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter. ## "Wait...but you called this a frankenMoE?" The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
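To make the routing description in this card concrete, here is a minimal hedged sketch of top-2 token routing with additive combination in plain PyTorch. It is illustrative only, not Mixtral's (or this merge's) actual implementation, and all sizes below are made up:

```python
# Minimal sketch of top-2 MoE routing: a learned gate picks two experts per
# token and their outputs are combined additively, weighted by the gate.
import torch
import torch.nn.functional as F

num_experts, top_k, hidden = 8, 2, 16
tokens = torch.randn(5, hidden)                # 5 token embeddings
router = torch.nn.Linear(hidden, num_experts)  # the learned gate network
experts = torch.nn.ModuleList(
    torch.nn.Sequential(
        torch.nn.Linear(hidden, 4 * hidden),
        torch.nn.GELU(),
        torch.nn.Linear(4 * hidden, hidden),
    )
    for _ in range(num_experts)
)

probs = F.softmax(router(tokens), dim=-1)              # (tokens, experts)
weights, chosen = torch.topk(probs, top_k, dim=-1)     # top-2 experts per token
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over top-2

out = torch.zeros_like(tokens)
for t in range(tokens.size(0)):
    for k in range(top_k):
        e = chosen[t, k].item()
        # each token's output is the weighted, additive combination of
        # the outputs of its chosen experts
        out[t] += weights[t, k] * experts[e](tokens[t])
```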
tisage/CodeQualityLC100
tisage
2024-02-29T02:34:47Z
0
0
tf-keras
[ "tf-keras", "TensorFlow", "BERT", "Transformer", "Classification", "Regression", "license:mit", "region:us" ]
null
2024-01-26T15:01:42Z
--- license: mit tags: - TensorFlow - BERT - Transformer - Classification - Regression --- # Code Quality Evaluation Dataset Welcome to the repository for our research paper: T. Wang and Z. Chen, "Analyzing Code Text Strings for Code Evaluation," 2023 IEEE International Conference on Big Data (BigData), Sorrento, Italy, 2023, pp. 5619-5628, doi: 10.1109/BigData59044.2023.10386406. ## Contents This repository contains the following: - Fine-tuned Model - Dataset (https://github.com/tisage/codeQuality) - License ## Model Info There are four BERT models, each fine-tuned on a dataset of 70K Python 3 solutions submitted by users for problems #1 through #100 on LeetCode: - `bert_lc100_hp25`: This model classifies code based on the 25th percentile as its threshold. It is designed for identifying lower-quartile code solutions in terms of quality or performance. - `bert_lc100_hp50`: Operating with a median-based approach, this model uses the 50th percentile as its classification threshold. It is suitable for general assessments, providing a balanced view of code quality. - `bert_lc100_regression`: Unlike the others, this is a regression model. It provides a nuanced prediction of the overall code quality score, offering a more detailed evaluation compared to the binary classification approach. - `bert_lc100_regression_v2`: similar to the `bert_lc100_regression` model, but the correctness score is calculated using the stricter exact-match rule `==` instead of similarity. ## Model Usage **Installation** First, ensure you have the latest version of the tf-models-official package. You can install it using the following command: ``` pip install -q tf-models-official ``` **Loading the Model** To use the bert_lc100_regression model within TensorFlow, follow these steps: ``` import tensorflow as tf import tensorflow_text as text model = tf.keras.models.load_model('saved_model/bert_lc100_regression/', compile=False) ``` **Making Predictions** To assess the quality of code, given that `X_test` contains a list of code strings, use the model to predict as follows: ``` y_pred = model.predict(X_test) ``` ## Reference If you find the dataset useful in your research or applications, please cite using the following BibTeX: ``` @INPROCEEDINGS{10386406, author={Wang, Tianyu and Chen, Zhixiong}, booktitle={2023 IEEE International Conference on Big Data (BigData)}, title={Analyzing Code Text Strings for Code Evaluation}, year={2023}, volume={}, number={}, pages={5619-5628}, keywords={Measurement;Deep learning;Codes;Bidirectional control;Organizations;Transformers;Software;code assessment;code annotation;deep learning;nature language processing;software assurance;code security}, doi={10.1109/BigData59044.2023.10386406} } ```
Kquant03/Cognito-2x7B-bf16
Kquant03
2024-02-29T02:32:00Z
53
5
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "merge", "en", "arxiv:2101.03961", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-04T05:39:54Z
--- license: apache-2.0 language: - en tags: - moe - merge --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/645cfe4603fc86c46b3e46d1/YIxTRkaxXfPL9NHM6ZL76.jpeg) # Kquant03/Cognito-2x7B-bf16 Knowledge is **power**. [Join our Discord!](https://discord.gg/ZgU79QDnE2) A frankenMoE using zero positive prompting for the config. (designed by [NeuralNovel](https://huggingface.co/NeuralNovel)) The basic config looks like this...(detailed version is in the files and versions): - [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) - base - [alnrg2arg/blockchainlabs_test3_seminar](https://huggingface.co/alnrg2arg/blockchainlabs_test3_seminar) - expert #1 - [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) - expert #2 # Uploading it to Evals, soon. # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)" ### (from the MistralAI papers...click the quoted question above to navigate to it directly.) The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps. Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining. So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs! A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png) Switch Layer MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961) So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts. Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges: Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting. Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. 
For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon). If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter. ## "Wait...but you called this a frankenMoE?" The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
Kquant03/Umbra-MoE-4x10.7-GGUF
Kquant03
2024-02-29T02:31:46Z
79
5
null
[ "gguf", "moe", "merge", "mergekit", "dataset:kodonho/SolarM-SakuraSolar-SLERP", "dataset:Sao10K/Sensualize-Solar-10.7B", "dataset:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "dataset:fblgit/UNA-SOLAR-10.7B-Instruct-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-21T10:04:26Z
--- license: apache-2.0 tags: - moe - merge - mergekit datasets: - kodonho/SolarM-SakuraSolar-SLERP - Sao10K/Sensualize-Solar-10.7B - NousResearch/Nous-Hermes-2-SOLAR-10.7B - fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/TN6IeT8hHjMCVljzVn2Fs.png) # Umbra-MoE-4x10.7 Umbra is an offshoot of the [Lumosia Series] with a focus on general knowledge and RP/ERP. This model was built around the idea that someone might want a general assistant that can also tell stories and do RP/ERP when asked. This is a very experimental model. It's a combination MoE of Solar models; the models selected are personal favorites. Base context is 4k, but it stays coherent up to 16k. Please let me know how the model works for you. [Join our Discord!](https://discord.gg/ZgU79QDnE2) ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Q2_K Tiny](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 12 GB| 14 GB | smallest, significant quality loss - not recommended for most purposes | | [Q3_K_M](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 15.7 GB| 17.7 GB | very small, high quality loss | | [Q4_0](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 20.3 GB| 22.3 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Q4_K_M](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 20.4 GB| 22.4 GB | medium, balanced quality - recommended | | [Q5_0](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 24.8 GB| 26.8 GB | legacy; large, balanced quality | | [Q5_K_M](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | 24.9 GB| 26.9 GB | large, balanced quality - recommended | | [Q6 XL](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 29.6 GB| 31.6 GB | very large, extremely minor degradation | | [Q8 XXL](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 38.4 GB| 40.4 GB | very large, extremely minor degradation - not recommended | Template: ChatML ``` ### System: ### USER:{prompt} ### Assistant: ``` Settings: ``` Temp: 1.0 min-p: 0.02-0.1 ``` ## Evals: * Avg: 73.4 * ARC: 70.31 * HellaSwag: 87.81 * MMLU: 66.42 * T-QA: 67.82 * Winogrande: 83.27 * GSM8K: 64.75 ## Examples: ``` User: Tell me an interesting fact. Umbra: The Amazon Rainforest is home to an astounding variety of life forms. It encompasses over five million square kilometers across nine South American countries and holds approximately one-third of all known species on Earth. This biodiversity hotspot supports more than 390 billion individual trees representing 16,000 species. Over two million insect species live here alongside thousands of bird, mammal, amphibian, and reptile species. In this dense forest, indigenous communities have developed intricate knowledge systems that allow them to thrive sustainably within their surroundings. ``` ``` User: Tell me a story about a red haired fox Umbra: In the enchanted forest of Ardenwood, a vibrant world teeming with life and magic, lived a curious creature known as Vixie, whose russet fur glowed against the verdure that surrounded her. 
Her emerald eyes twinkled with mischief and wonder as she roamed the lush landscape, befriending all who crossed her path. One sunlit afternoon, Vixie found herself weaving amongst a sea of tall grasses swaying gently in the breeze. Ahead lay a shimmering riverbank, where a gathering of animals was assembled around a mystifying sight - a radiant stone pulsing with an ethereal glow. A majestic stag approached Vixie, his voice resonating with authority. He introduced himself as Elthon, leader of this assembly. They had gathered to seek guidance from the celestial gem, said to hold the secrets of the universe within its sparkling depths. Vixie, filled with curiosity, asked if she could partake in their quest for knowledge. Elthon agreed, and they each took turns gazing into the jewel. As Vixie's turn came, she felt a tingling sensation course through her body, her senses heightened as the stone revealed a vision before her. In the dreamlike tableau, Vixie saw herself standing atop a hill overlooking Ardenwood. She watched as the seasons changed, the land thrived, and creatures played out their destinies. It appeared the fates of these beings were intertwined, a tapestry woven together by love, friendship, sacrifice, and growth. When the trance subsided, Vixie returned to the present moment, feeling humbled and grateful. This revelation emboldened her desire to cherish those bonds she shared with others, knowing how vital they were to the beauty and harmony of the enchanted realm. From then on, Vixie continued her adventures with renewed purpose, nurturing friendships and embracing the mysteries of Ardenwood. ``` Umbra-MoE-4x10.7 is a Mixture of Experts (MoE) made using the following models: * [kodonho/SolarM-SakuraSolar-SLERP](https://huggingface.co/kodonho/SolarM-SakuraSolar-SLERP) * [Sao10K/Sensualize-Solar-10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) * [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) * [fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) ## 🧩 Configuration ``` base_model: kodonho/SolarM-SakuraSolar-SLERP gate_mode: hidden dtype: bfloat16 experts: - source_model: kodonho/SolarM-SakuraSolar-SLERP positive_prompts: - "versatile" - "helpful" - "factual" - "integrated" - "adaptive" - "comprehensive" - "balanced" negative_prompts: - "specialized" - "narrow" - "focused" - "limited" - "specific" - source_model: Sao10K/Sensualize-Solar-10.7B positive_prompts: - "creative" - "chat" - "discuss" - "culture" - "world" - "expressive" - "detailed" - "imaginative" - "engaging" negative_prompts: - "sorry" - "cannot" - "factual" - "concise" - "straightforward" - "objective" - "dry" - source_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B positive_prompts: - "analytical" - "accurate" - "logical" - "knowledgeable" - "precise" - "calculate" - "compute" - "solve" - "work" - "python" - "javascript" - "programming" - "algorithm" - "tell me" - "assistant" negative_prompts: - "creative" - "abstract" - "imaginative" - "artistic" - "emotional" - "mistake" - "inaccurate" - source_model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 positive_prompts: - "instructive" - "clear" - "directive" - "helpful" - "informative" negative_prompts: - "exploratory" - "open-ended" - "narrative" - "speculative" - "artistic" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "Steelskull/Umbra-MoE-4x10.7" 
tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Kquant03/Samlagast-7B-GGUF
Kquant03
2024-02-29T02:31:28Z
42
0
null
[ "gguf", "mergekit", "merge", "en", "arxiv:2212.04089", "base_model:FelixChao/Faraday-7B", "base_model:merge:FelixChao/Faraday-7B", "base_model:flemmingmiguel/MBX-7B-v3", "base_model:merge:flemmingmiguel/MBX-7B-v3", "base_model:paulml/NeuralOmniBeagleMBX-v3-7B", "base_model:merge:paulml/NeuralOmniBeagleMBX-v3-7B", "base_model:paulml/NeuralOmniWestBeaglake-7B", "base_model:merge:paulml/NeuralOmniWestBeaglake-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-09T20:12:15Z
--- base_model: - flemmingmiguel/MBX-7B-v3 - paulml/NeuralOmniWestBeaglake-7B - FelixChao/Faraday-7B - paulml/NeuralOmniBeagleMBX-v3-7B tags: - mergekit - merge license: apache-2.0 language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/eDLmpTkM4vuk8HiQcUzWv.png) # To see what will happen. [Join our Discord!](https://discord.gg/ZgU79QDnE2) [BASE MODEL HERE](https://huggingface.co/Kquant03/Samlagast-7B-bf16) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [paulml/NeuralOmniBeagleMBX-v3-7B](https://huggingface.co/paulml/NeuralOmniBeagleMBX-v3-7B) as a base. ### Models Merged The following models were included in the merge: * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) * [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B) * [FelixChao/Faraday-7B](https://huggingface.co/FelixChao/Faraday-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: paulml/NeuralOmniWestBeaglake-7B parameters: weight: 1 - model: FelixChao/Faraday-7B parameters: weight: 1 - model: flemmingmiguel/MBX-7B-v3 parameters: weight: 1 - model: paulml/NeuralOmniBeagleMBX-v3-7B parameters: weight: 1 merge_method: task_arithmetic base_model: paulml/NeuralOmniBeagleMBX-v3-7B parameters: normalize: true int8_mask: true dtype: float16 ```
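A brief hedged sketch of the task arithmetic idea used above (from the cited paper, arXiv:2212.04089): each fine-tuned model contributes a "task vector", its weights minus the base model's, and the merge adds a weighted sum of those vectors back onto the base. mergekit's real implementation also handles normalization, masking, and dtype, which this sketch omits:

```python
# Illustrative sketch of task-arithmetic merging over state dicts; not
# mergekit's actual code. `normalize` and `int8_mask` handling are omitted.
import torch

def task_arithmetic(base: dict, finetuned: list, weights: list) -> dict:
    merged = {}
    for name, base_param in base.items():
        # task vector for each model: fine-tuned weights minus base weights
        delta = sum(w * (ft[name] - base_param) for ft, w in zip(finetuned, weights))
        merged[name] = base_param + delta
    return merged
```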
Kquant03/NeuralTrix-7B-dpo-laser-GGUF
Kquant03
2024-02-29T02:28:26Z
6
2
null
[ "gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-14T13:37:51Z
--- license: apache-2.0 language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/IclpTHDxA6a7xc7TZ5vth.png) # Made using Cultrix's Model, which can be found [here](https://huggingface.co/CultriX/NeuralTrix-7B-dpo). [Join our Discord!](https://discord.gg/ZgU79QDnE2) [BASE MODEL HERE](https://huggingface.co/Kquant03/NeuralTrix-7B-dpo-laser) NeuralTrix-7B-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) * [AiMavenAi/AiMaven-Prometheus](https://huggingface.co/AiMavenAi/AiMaven-Prometheus) It was then trained with DPO by Cultrix using: * https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1 I performed laser_snr_math on it afterwards to see if I could improve it. ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: mlabonne/OmniBeagle-7B parameters: density: 0.65 weight: 0.4 - model: flemmingmiguel/MBX-7B-v3 parameters: density: 0.6 weight: 0.35 - model: AiMavenAi/AiMaven-Prometheus parameters: density: 0.6 weight: 0.35 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "CultriX/NeuralTrix-7B-v1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
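As context for the `density` values in the config above: the DARE step of `dare_ties` randomly drops a fraction (1 - density) of each model's delta from the base and rescales the survivors by 1/density, so the expected delta is preserved. A hedged one-function sketch (not mergekit's actual code) of that step:

```python
# Illustrative DARE drop-and-rescale step; mergekit's dare_ties additionally
# applies TIES-style sign election when combining the sparsified deltas.
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    mask = torch.bernoulli(torch.full_like(delta, density))  # keep with p=density
    return delta * mask / density                            # rescale survivors
```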
ICTNLP/TruthX
ICTNLP
2024-02-29T02:25:20Z
0
5
null
[ "arxiv:2402.17811", "license:gpl-3.0", "region:us" ]
null
2024-02-26T02:50:27Z
--- license: gpl-3.0 --- # TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space > [Shaolei Zhang](https://zhangshaolei1998.github.io/), [Tian Yu](https://tianyu0313.github.io/), [Yang Feng](https://people.ucas.edu.cn/~yangfeng?language=en)* TruthX models for paper "[TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space](https://arxiv.org/pdf/2402.17811.pdf)". **TruthX** is an inference-time method to elicit the truthfulness of LLMs by editing their internal representations in truthful space, thereby mitigating the hallucinations of LLMs. On the [TruthfulQA benchmark](https://paperswithcode.com/sota/question-answering-on-truthfulqa), TruthX yields an average **enhancement of 20% in truthfulness** across 13 advanced LLMs. <div align="center"> <img src="./truthx_results.png" alt="img" width="100%" /> </div> <p align="center"> TruthfulQA MC1 accuracy of TruthX across 13 advanced LLMs </p> This repo provides TruthX models trained on a variety of LLMs: - Llama-1-7B, Alpaca-7B - Llama-2-7B, Llama-2-7B-Chat, Vicuna-7B-v1.5 - Mistral-7B-v0.1, Mistral-7B-Instruct-v0.1, Mistral-7B-Instruct-v0.2 - Baichuan2-7B-Base, Baichuan2-7B-Chat - Chatglm3-6B-Base, Chatglm3-6B Please refer to [GitHub repo](https://github.com/ictnlp/TruthX) and [our paper](https://arxiv.org/pdf/2402.17811.pdf) for more details. ## Licence Model weights and the inference code are released under The GNU General Public License v3.0 (GPLv3) ## Citation If this repository is useful for you, please cite as: ``` @misc{zhang2024truthx, title={TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space}, author={Shaolei Zhang and Tian Yu and Yang Feng}, year={2024}, eprint={2402.17811}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2402.17811} } ``` If you have any questions, feel free to contact `zhangshaolei20z@ict.ac.cn`.
Kquant03/Buttercup-4x7B-GGUF
Kquant03
2024-02-29T02:14:09Z
113
8
null
[ "gguf", "moe", "merge", "en", "arxiv:2101.03961", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-22T06:54:12Z
--- license: apache-2.0 language: - en tags: - moe - merge --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/qj_lv87mPw8t7KsNU07Uu.png) # "[We] are joined by the bonds of love. And you cannot track that, not with a thousand bloodhounds, and you cannot break it, not with a thousand swords." ## This is the repo for the GGUF files [BASE MODEL](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16) [Join our Discord!](https://discord.gg/uT4CzytfYW) A frankenMoE not only using far better methodology and fundamental understanding of SMoE, but completely focused around intellectual roleplay. This model doesn't actually have redundancy like most MoE does...but if you encounter this, try to keep things fresh with the model by either introducing new concepts often, or through [drμgs](https://github.com/EGjoni/DRUGS). (no not that kind) The models that were implemented are as follows: - [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - base - [fblgit/una-cybertron-7b-v3-OMA](https://huggingface.co/fblgit/una-cybertron-7b-v3-OMA) - expert #1 - [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2) - expert #2 - [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - expert #3 - [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - expert #4 # Completely mogs mixtral instruct 0.1 across multiple benchmarks at half the size ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/GlhMcDiRhmUOsITmBplVT.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/cK0isGt1Nm2lEXZ9INrfu.png) ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Q2_K Tiny](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 7.87 GB| 9.87 GB | smallest, significant quality loss - not recommended for most purposes | | [Q3_K_M](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 10.28 GB| 12.28 GB | very small, high quality loss | | [Q4_0](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 13.3 GB| 15.3 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Q4_K_M](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 13.32 GB| 15.32 GB | medium, balanced quality - recommended | | [Q5_0](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 16.24 GB| 18.24 GB | legacy; large, balanced quality | | [Q5_K_M](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~16.24 GB| ~18.24 GB | large, balanced quality - recommended | | [Q6 XL](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 19.35 GB| 21.35 GB | very large, extremely minor degradation | | [Q8 XXL](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 25.1 GB| 27.1 GB | very large, extremely minor degradation - not recommended | # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)" ### (from the MistralAI papers...click the quoted question above to navigate to it directly.) The scale of a model is one of the most important axes for better model quality. 
Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps. Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining. So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs! A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png) Switch Layer MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961) So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts. Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges: Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting. Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon). If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. 
The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter. ## "Wait...but you called this a frankenMoE?" The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
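As a hedged usage sketch for the quantized files listed in the table further up this card (the card itself gives no loader example; this assumes the `llama-cpp-python` bindings, which are not mentioned in the card):

```python
# Hedged sketch, not from the card: run a downloaded GGUF quant with
# llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="ggml-model-q4_k_m.gguf", n_ctx=4096)
out = llm(
    "Write a short scene between two rivals who secretly respect each other.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```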
Kquant03/Azathoth-16x7B-bf16
Kquant03
2024-02-29T02:13:22Z
50
2
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "merge", "conversational", "en", "arxiv:2101.03961", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-03T22:56:50Z
--- license: apache-2.0 language: - en tags: - moe - merge --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/uCSBGV4HR05CK1oPPR38j.png) # W�RNING:r0Ot:ALL lay3rs have degenerate rou�ing parameters - your prompts m@y be too simi-��-[�#$](https://huggingface.co/Kquant03/Azathoth-16x7B-bf16/blob/main/mergekit_moe_config.yml) [Join our Discord!](https://discord.gg/uT4CzytfYW) After testing with these models, I would like to see if there is a limit to prompting with mergekit MoE. This will be the first of those tests. The config looks like this...(detailed version is in the files and versions): - [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - base - [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #1 - [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #2 - [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #3 - [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #4 - [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #5 - [senseable/garten2-7b](https://huggingface.co/senseable/garten2-7b) - expert #6 - [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #7 - [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #8 - [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #9 - [senseable/garten2-7b](https://huggingface.co/senseable/garten2-7b) - expert #10 - [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #11 - [FelixChao/Severus-7B](https://huggingface.co/FelixChao/Severus-7B) - expert #12 - [ConvexAI/Metabird-7B](https://huggingface.co/ConvexAI/Metabird-7B) - expert #13 - [FelixChao/Severus-7B](https://huggingface.co/FelixChao/Severus-7B) - expert #14 - [ConvexAI/Metabird-7B](https://huggingface.co/ConvexAI/Metabird-7B) - expert #15 - [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #16 # Uploading it to the leaderboard. It will not convert to gguf. # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)" ### (from the MistralAI papers...click the quoted question above to navigate to it directly.) The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps. Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining. So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs! A gate network or router, that determines which tokens are sent to which expert. 
For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png) Switch Layer MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961) So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts. Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges: Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting. Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon). If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter. ## "Wait...but you called this a frankenMoE?" The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
Kquant03/Buttercup-4x7B-V2-GGUF
Kquant03
2024-02-29T02:02:56Z
49
3
null
[ "gguf", "moe", "merge", "en", "arxiv:2101.03961", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-02-15T19:19:26Z
--- license: apache-2.0 language: - en tags: - moe - merge --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/IxUHh1KtVmEgykXC6ya66.png) # "[We] are joined by the bonds of love. And you cannot track that, not with a thousand bloodhounds, and you cannot break it, not with a thousand swords." [BASE MODEL HERE](https://huggingface.co/Kquant03/Buttercup-V2-bf16) [Join our Discord!](https://discord.gg/uT4CzytfYW) I will be lasering this model, so stay tuned. Hopefully this is an improvement upon my [original model](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16). The config looks like this...(detailed version is in the files and versions): - [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - base - [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #1 - [cognitivecomputations/samantha-1.1-westlake-7b](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b) - expert #2 - [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #3 - [Kukedlc/Triunvirato-7b](https://huggingface.co/Kukedlc/Triunvirato-7b) - expert #4 # Huge improvement upon the base Buttercup model!!!! ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/f2kxGJNw2iVBExYHiydB0.png) # Laser version is rank 3 in the world for roleplay. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/w4-E6jF1nwd_HAO96Ldko.png) # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)" ### (from the MistralAI papers...click the quoted question above to navigate to it directly.) The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps. Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining. So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs! A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png) Switch Layer MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961) So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts. Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges: Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting. Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon). If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter. ## "Wait...but you called this a frankenMoE?" The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
intvy/setfit-absa-all-mpnet-base-v2-laptops-polarity
intvy
2024-02-29T01:59:11Z
4
0
setfit
[ "setfit", "safetensors", "mpnet", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-02-29T01:59:11Z
--- library_name: setfit tags: - setfit - absa - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: louder and the mouse didnt break:I wish the volume could be louder and the mouse didnt break after only a month. - text: + + (sales, service,:BEST BUY - 5 STARS + + + (sales, service, respect for old men who aren't familiar with the technology) DELL COMPUTERS - 3 stars DELL SUPPORT - owes a me a couple - text: back and my built-in webcam and built-:I got it back and my built-in webcam and built-in mic were shorting out anytime I touched the lid, (mind you this was my means of communication with my fiance who was deployed) but I suffered thru it and would constandly have to reset the computer to be able to use my cam and mic anytime they went out. - text: after i install Mozzilla firfox i love every:the only fact i dont like about apples is they generally use safari and i dont use safari but after i install Mozzilla firfox i love every single bit about it. - text: in webcam and built-in mic were shorting out:I got it back and my built-in webcam and built-in mic were shorting out anytime I touched the lid, (mind you this was my means of communication with my fiance who was deployed) but I suffered thru it and would constandly have to reset the computer to be able to use my cam and mic anytime they went out. pipeline_tag: text-classification inference: false base_model: sentence-transformers/all-mpnet-base-v2 model-index: - name: SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: tomaarsen/setfit-absa-semeval-laptops type: unknown split: test metrics: - type: accuracy value: 0.7007874015748031 name: Accuracy --- # SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. This model was trained within the context of a larger system for ABSA, which looks like so: 1. Use a spaCy model to select possible aspect span candidates. 2. Use a SetFit model to filter these possible aspect span candidates. 3. 
**Use this SetFit model to classify the filtered aspect span candidates.** ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **spaCy Model:** en_core_web_sm - **SetFitABSA Aspect Model:** [joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect](https://huggingface.co/joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect) - **SetFitABSA Polarity Model:** [joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity](https://huggingface.co/joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity) - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [tomaarsen/setfit-absa-semeval-laptops](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-laptops) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | neutral | <ul><li>'skip taking the cord with me because:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'The tech guy then said the:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'all dark, power light steady, hard:\xa0One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> | | positive | <ul><li>'of the good battery life.:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'is of high quality, has a:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li><li>'has a killer GUI, is extremely:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li></ul> | | negative | <ul><li>'then said the service center does not do:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail 
shop which I bought my netbook from.'</li><li>'concern to the "sales" team, which is:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'on, no GUI, screen all:\xa0One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> | | conflict | <ul><li>'-No backlit keyboard, but not:-No backlit keyboard, but not an issue for me.'</li><li>"to replace the battery once, but:I did have to replace the battery once, but that was only a couple months ago and it's been working perfect ever since."</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.7008 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import AbsaModel # Download from the 🤗 Hub model = AbsaModel.from_pretrained( "joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect", "joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity", spacy_model="en_core_web_sm", ) # Run inference preds = model("This laptop meets every expectation and Windows 7 is great!") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 3 | 25.5873 | 48 | | Label | Training Sample Count | |:---------|:----------------------| | conflict | 2 | | negative | 45 | | neutral | 30 | | positive | 49 | ### Training Hyperparameters - batch_size: (128, 128) - num_epochs: (5, 5) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: True - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:----------:|:------:|:-------------:|:---------------:| | 0.0120 | 1 | 0.2721 | - | | **0.6024** | **50** | **0.0894** | **0.2059** | | 1.2048 | 100 | 0.0014 | 0.2309 | | 1.8072 | 150 | 0.0006 | 0.2359 | | 2.4096 | 200 | 0.0005 | 0.2373 | | 3.0120 | 250 | 0.0004 | 0.2364 | | 3.6145 | 300 | 0.0003 | 0.2371 | * The bold row denotes the saved checkpoint. 
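For reference, here is a hypothetical sketch of how a run with the hyperparameters above could be reproduced with setfit's `AbsaTrainer`. The embedding model, spaCy model, and dataset ids are taken from this card; the rest is illustrative, and the exact `TrainingArguments` fields may differ across setfit versions.

```python
from datasets import load_dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Build the aspect and polarity models from the same Sentence Transformer body
model = AbsaModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2",
    spacy_model="en_core_web_sm",
)
train_dataset = load_dataset("tomaarsen/setfit-absa-semeval-laptops", split="train")

args = TrainingArguments(
    batch_size=128,
    num_epochs=5,
    body_learning_rate=2e-5,
    head_learning_rate=0.01,
    use_amp=True,
    load_best_model_at_end=True,
)
trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```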
### Framework Versions - Python: 3.11.7 - SetFit: 1.0.3 - Sentence Transformers: 2.3.0 - spaCy: 3.7.2 - Transformers: 4.37.2 - PyTorch: 2.1.2+cu118 - Datasets: 2.16.1 - Tokenizers: 0.15.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
mxy680/sentiment-model
mxy680
2024-02-28T21:10:15Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T20:58:03Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: sentiment-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
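The card gives no usage example, so here is a hypothetical inference sketch using the transformers pipeline API. Because the training dataset is unknown, the label names returned by the model are not documented and may be the generic `LABEL_0`/`LABEL_1`.

```python
from transformers import pipeline

# Hypothetical usage; the checkpoint id is this repo's model id
classifier = pipeline("text-classification", model="mxy680/sentiment-model")
print(classifier("This movie was surprisingly good!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]; label meaning depends on the unknown training data
```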
Lienid/nous-twelve
Lienid
2024-02-28T21:05:01Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:16:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mantis-VL/mfuyu_v2_8192_720p-5500
Mantis-VL
2024-02-28T21:04:16Z
5
0
transformers
[ "transformers", "safetensors", "fuyu", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T20:59:03Z
--- tags: - generated_from_trainer model-index: - name: mfuyu_v2_8192_720p-5500 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mfuyu_v2_8192_720p-5500 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.17.1 - Tokenizers 0.15.0
giux78/gemma-2b-sft-ita
giux78
2024-02-28T21:03:41Z
114
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T21:00:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Owhslp/nous_researcher_tuning_21
Owhslp
2024-02-28T20:56:51Z
114
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T20:32:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tanatapanun/fine-tuned-FLAN-T5-20-epochs-wanglab-512-output
tanatapanun
2024-02-28T20:56:35Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-28T19:22:00Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: fine-tuned-FLAN-T5-20-epochs-wanglab-512-output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-FLAN-T5-20-epochs-wanglab-512-output This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.0705 - Rouge1: 0.1508 - Rouge2: 0.0272 - Rougel: 0.1374 - Rougelsum: 0.1351 - Bertscore F1: 0.8553 - Bleurt Score: -1.2097 - Gen Len: 14.69 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore F1 | Bleurt Score | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------------:|:------------:|:-------:| | No log | 1.0 | 301 | 11.0933 | 0.065 | 0.0148 | 0.0596 | 0.0595 | 0.7859 | -1.4402 | 18.92 | | 20.9249 | 2.0 | 602 | 9.2324 | 0.0604 | 0.0154 | 0.0556 | 0.0554 | 0.7869 | -1.3807 | 17.42 | | 20.9249 | 3.0 | 903 | 7.6254 | 0.0681 | 0.0192 | 0.0632 | 0.0627 | 0.7978 | -1.4375 | 18.42 | | 11.3584 | 4.0 | 1204 | 6.7112 | 0.0614 | 0.0073 | 0.0578 | 0.0582 | 0.8076 | -1.3157 | 14.34 | | 8.9106 | 5.0 | 1505 | 6.6742 | 0.0701 | 0.0204 | 0.0638 | 0.0635 | 0.7968 | -1.3894 | 17.29 | | 8.9106 | 6.0 | 1806 | 5.9658 | 0.0836 | 0.0145 | 0.074 | 0.0742 | 0.818 | -1.3081 | 13.76 | | 7.8674 | 7.0 | 2107 | 5.7095 | 0.113 | 0.025 | 0.1061 | 0.1078 | 0.8433 | -1.4119 | 13.71 | | 7.8674 | 8.0 | 2408 | 5.6269 | 0.0987 | 0.0147 | 0.0933 | 0.0939 | 0.8201 | -1.2529 | 15.32 | | 6.7786 | 9.0 | 2709 | 5.5192 | 0.1133 | 0.0203 | 0.1038 | 0.1051 | 0.8484 | -1.3751 | 13.75 | | 6.3646 | 10.0 | 3010 | 5.4626 | 0.1347 | 0.0276 | 0.122 | 0.1236 | 0.8501 | -1.278 | 13.16 | | 6.3646 | 11.0 | 3311 | 5.4467 | 0.103 | 0.0172 | 0.0951 | 0.0943 | 0.8263 | -1.3587 | 15.48 | | 5.6998 | 12.0 | 3612 | 5.4587 | 0.126 | 0.0326 | 0.1191 | 0.1183 | 0.8474 | -1.2782 | 15.86 | | 5.6998 | 13.0 | 3913 | 5.4846 | 0.1523 | 0.0325 | 0.1407 | 0.1408 | 0.8528 | -1.2406 | 14.82 | | 5.2971 | 14.0 | 4214 | 5.6166 | 0.1363 | 0.0275 | 0.1279 | 0.1247 | 0.8512 | -1.2827 | 14.7 | | 4.9391 | 15.0 | 4515 | 5.6821 | 0.1479 | 0.0238 | 0.136 | 0.1342 | 0.8545 | -1.2217 | 14.72 | | 4.9391 | 16.0 | 4816 | 5.7849 | 0.1577 | 0.0307 | 0.1455 | 0.1445 | 0.8566 | -1.1756 | 15.25 | | 4.6035 | 17.0 | 5117 | 5.8945 | 0.1313 | 0.0234 | 0.1214 | 0.1199 | 0.8525 | -1.2609 | 14.67 | | 4.6035 | 18.0 | 5418 | 5.9956 | 0.1506 | 0.0315 | 0.1367 | 0.1348 | 0.8542 | -1.2107 | 14.61 | | 4.3893 | 19.0 | 5719 | 6.0337 | 0.1449 | 0.0294 | 0.1337 | 0.1317 | 0.8553 | -1.2173 | 14.49 | | 4.245 | 20.0 | 6020 | 6.0705 | 0.1508 | 0.0272 | 0.1374 | 0.1351 | 0.8553 | -1.2097 | 14.69 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
KevStrider/ppo-LunarLander-v2
KevStrider
2024-02-28T20:55:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T20:55:00Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.05 +/- 51.82 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption based on the conventional `<algo>-<env>.zip` naming used by huggingface_sb3, so check the repo's file list if it differs.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(repo_id="KevStrider/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
ryusangwon/4600_Llama-2-7b-hf
ryusangwon
2024-02-28T20:53:03Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:samsum", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-02-28T20:52:58Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer datasets: - samsum model-index: - name: 4600_Llama-2-7b-hf results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 4600_Llama-2-7b-hf This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.15.0 - Tokenizers 0.15.0
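Since the card documents only the adapter and quantization settings, here is a hypothetical loading sketch using peft's `AutoPeftModelForCausalLM`. Note the assumptions: it requires a newer PEFT release than the 0.4.0 used for training (the Auto classes arrived later), access to the gated meta-llama base weights, and an `adapter_config.json` in this repo pointing at that base model, as the card states.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Downloads the adapter and applies it on top of meta-llama/Llama-2-7b-hf
model = AutoPeftModelForCausalLM.from_pretrained(
    "ryusangwon/4600_Llama-2-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```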
CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo
CorticalStack
2024-02-28T20:49:34Z
106
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dpo", "conversational", "base_model:CorticalStack/neurotic-crown-clown-7b-ties", "base_model:finetune:CorticalStack/neurotic-crown-clown-7b-ties", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:34:57Z
--- license: apache-2.0 tags: - dpo base_model: - CorticalStack/neurotic-crown-clown-7b-ties dataset: - CorticalStack/tak-stack-dpo --- <img src="neurotic-crown-clown-tak-stack.png" alt="Neurotic crown clown tak stack logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # neurotic-crown-clown-7b-tak-stack-dpo neurotic-crown-clown-7b-tak-stack-dpo is a DPO fine-tuned version of [CorticalStack/neurotic-crown-clown-7b-ties](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-ties) using the [CorticalStack/tak-stack-dpo](https://huggingface.co/datasets/CorticalStack/tak-stack-dpo) dataset. ### LoRA - r: 32 - LoRA alpha: 32 - LoRA dropout: 0.05 ### Training arguments - Batch size: 4 - Gradient accumulation steps: 4 - Optimizer: paged_adamw_32bit - Max steps: 100 - Learning rate: 5e-05 - Learning rate scheduler type: cosine - Beta: 0.1 - Max prompt length: 1024 - Max length: 1536
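As a reference, here is a sketch of how a DPO run with the settings above might be wired up with trl's `DPOTrainer` and peft. The model, dataset, LoRA, and training values come from this card; everything else is illustrative, the keyword layout matches the trl 0.7-era API (it changed in later releases), and the sketch assumes the dataset exposes the usual `prompt`/`chosen`/`rejected` columns.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "CorticalStack/neurotic-crown-clown-7b-ties"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
train_dataset = load_dataset("CorticalStack/tak-stack-dpo", split="train")

peft_config = LoraConfig(r=32, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",
    max_steps=100,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    output_dir="./dpo-out",
)
trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT config, trl derives the reference model from the base
    args=args,
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```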
ankhamun/IxI000x800000_000x800IxI
ankhamun
2024-02-28T20:48:09Z
114
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T20:46:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq
CorticalStack
2024-02-28T20:47:14Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-02-28T20:41:17Z
--- license: apache-2.0 --- <img src="neurotic-crown-clown-tak-stack.png" alt="Neurotic crown clown tak stack logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq is an AWQ quantised version of [CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ### AWQ configuration - Zero point: True - Q group size: 128 - W bit: 4 - Version: GEMM
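As a concrete starting point, here is a minimal Transformers loading sketch for this AWQ checkpoint. It assumes a Linux machine with an NVIDIA GPU and autoawq installed, per the support notes above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-awq"

# Transformers >= 4.35 reads the AWQ quantization config from the repo automatically
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```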
LoneStriker/Mixtral-8x7B-Holodeck-v1-6.0bpw-h6-exl2
LoneStriker
2024-02-28T20:45:13Z
5
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T20:00:30Z
--- license: apache-2.0 language: - en pipeline_tag: text-generation tags: - pytorch - mixtral - fine-tuned - moe --- # Mixtral 8x7B - Holodeck ## Model Description Mixtral 8x7B-Holodeck is a fine-tune of Mixtral's 8x7B model. ## Training data The training data contains around 3000 ebooks in various genres. Most of the dataset has been prepended with the following text: `[Genre: <genre1>, <genre2>]` *** ### Limitations and Biases Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
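The card documents the genre-tag convention but gives no example, so here is a minimal illustrative sketch of composing a prompt in that format. The genres and continuation text are invented, and this particular repo is an exl2 quantization, which is normally loaded with exllamav2-based tooling rather than plain transformers.

```python
# Build a prompt using the "[Genre: <genre1>, <genre2>]" convention from the training data
genres = ["science fiction", "mystery"]
prompt = f"[Genre: {', '.join(genres)}]\nThe station's last shuttle had been gone for three days when"
print(prompt)
# [Genre: science fiction, mystery]
# The station's last shuttle had been gone for three days when
```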
faux-monke/LunarLander_DeepRL
faux-monke
2024-02-28T20:40:31Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-25T20:25:55Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 287.87 +/- 25.46 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is a guess at the zip name inside this repo, following the `<algo>-<env>.zip` convention used by huggingface_sb3, so check the repo's file list if it differs.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(repo_id="faux-monke/LunarLander_DeepRL", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
ahmedfaiyaz/OkkhorDiffusion-Ekush
ahmedfaiyaz
2024-02-28T20:40:27Z
27
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "bn", "license:cc-by-nc-4.0", "diffusers:OkkhorDiffusionPipeline", "region:us" ]
text-to-image
2024-02-27T05:37:10Z
--- license: cc-by-nc-4.0 language: - bn library_name: diffusers pipeline_tag: text-to-image inference: false --- # Okkhor Diffusion Okkhor Diffusion is a category of Denoising Diffusion Probabilistic Models designed to generate images of Bangla handwritten characters. This model card corresponds to Okkhor-Diffusion trained on **Ekush dataset**. ## Variants - [Okkhor Diffusion trained on Banglalekha-Isolated](https://huggingface.co/ahmedfaiyaz/OkkhorDiffusion) - [Okkhor Diffusion trained on CMATERdb](https://huggingface.co/ahmedfaiyaz/OkkhorDiffusion-CMATERdb) - [Okkhor Diffusion trained on Ekush](https://huggingface.co/ahmedfaiyaz/OkkhorDiffusion-Ekush) ### Usage ```py from diffusers import DiffusionPipeline import torch device="cuda" pipeline = DiffusionPipeline.from_pretrained( "ahmedfaiyaz/OkkhorDiffusion-Ekush", custom_pipeline="ahmedfaiyaz/OkkhorDiffusion", embedding=torch.int16 ) pipeline.to(device) pipeline.embedding=torch.tensor([10],device=device) # 'ও': 10 pipeline(batch_size=1,num_inference_steps=1000).images[0] ``` # Mappings |OkkhorDiffusion-Ekush|| |---------|--------| |Character| Serial | |অ|1| |আ|2| |ই|3| |ঈ|4| |উ|5| |ঊ|6| |ঋ|7| |এ|8| |ঐ|9| |ও|10| |ঔ|11| |ক|12| |খ|13| |গ|14| |ঘ|15| |ঙ|16| |চ|17| |ছ|18| |জ|19| |ঝ|20| |ঞ|21| |ট|22| |ঠ|23| |ড|24| |ঢ|25| |ণ|26| |ত|27| |থ|28| |দ|29| |ধ|30| |ন|31| |প|32| |ফ|33| |ব|34| |ভ|35| |ম|36| |য|37| |র|38| |ল|39| |শ|40| |ষ|41| |স|42| |হ|43| |ড়|44| |ঢ়|45| |য়|46| |ৎ|47| |ং|48| |ঃ|49| |ঁ|50| |ব্দ|51| |ঙ্গ|52| |স্ক|53| |স্ফ|54| |চ্ছ|55| |স্থ|56| |ক্ত|57| |স্ন|58| |ষ্ণ|59| |ম্প|60| |প্ত|61| |ম্ব|62| |ত্থ|63| |দ্ভ|64| |ষ্ঠ|65| |ল্প|66| |ষ্প|67| |ন্দ|68| |ন্ধ|69| |স্ম|70| |ণ্ঠ|71| |স্ত|72| |ষ্ট|73| |ন্ম|74| |ত্ত|75| |ঙ্খ|76| |ত্ন|77| |ন্ড|78| |জ্ঞ|79| |ড্ড|80| |ক্ষ|81| |দ্ব|82| |চ্চ|83| |ক্র|84| |দ্দ|85| |জ্জ|86| |ক্ক|87| |ন্ত|88| |ক্ট|89| |ঞ্চ|90| |ট্ট|91| |শ্চ|92| |ক্স|93| |জ্ব|94| |ঞ্জ|95| |দ্ধ|96| |ন্ন|97| |ঘ্ন|98| |ক্ল|99| |হ্ন|100| |০|101| |১|102| |২|103| |৩|104| |৪|105| |৫|106| |৬|107| |৭|108| |৮|109| |৯|110| # Citation ``` @ARTICLE{10445466, author={Fuad, Md Mubtasim and Faiyaz, A. and Arnob, Noor Mairukh Khan and Mridha, M.F. and Saha, Aloke Kumar and Aung, Zeyar}, journal={IEEE Access}, title={Okkhor-Diffusion: Class Guided Generation of Bangla Isolated Handwritten Characters using Denoising Diffusion Probabilistic Model (DDPM)}, year={2024}, volume={}, number={}, pages={1-1}, abstract={Bangla has a unique script with a complex set of characters, making it a fascinating subject of study for linguists and cultural enthusiasts. Unique in some of its similar characters which are only distinguishable by subtle differences in their shapes and diacritics, there has been a notable increase in research on Bangla character recognition and classification using machine learning-based approaches. However, Handwritten Bangla Character Recognition (HBCR) training requires an adequate amount of data from a diversely distributed dataset. Making diverse datasets for HBCR training is a challenging and tedious task to carry out. Yet, there is limited research on the automatic generation of handwritten Bangla characters. Motivated by this open area of research, this paper proposes a novel approach ’Okkhor-Diffusion’ for class-guided generation of Bangla isolated handwritten characters using a novel Denoising Diffusion Probabilistic Model (DDPM). No prior research has used DDPM for this purpose, making the proposed approach novel. 
The DDPM is a generative model that uses a diffusion process to transform noise-corrupted data into diverse samples; despite being trained on a small training set. In our experiments, StyleGAN2-ADA had notably inferior performance compared to Okkhor-Diffusion in generating realistic isolated handwritten Bangla characters. Experimental results on the BanglaLekha-Isolated dataset demonstrate that the proposed Okkhor-Diffusion model generates realistic isolated handwritten Bangla characters, with a mean Multi-Scale Structural Similarity Index Measure (MS-SSIM) score of 0.178 compared to 0.177 for the real samples. The Fréchet Inception Distance (FID) score for the synthetic handwritten Bangla characters is 5.426. Finally, the newly proposed Bangla Character Aware Fréchet Inception Distance (BCAFID) score of the proposed Okkhor-Diffusion model is 10.388.}, keywords={Deep learning;Handwritten character generation;Generative Model;Denoising Diffusion Probabilistic Model}, doi={10.1109/ACCESS.2024.3370674}, ISSN={2169-3536}, month={},} ```
NickyNicky/gemma-2b-it_oasst2_Cluster_2_aya_dataset_multilingual_chatml_response_json_V1
NickyNicky
2024-02-28T20:40:05Z
117
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "bg", "ca", "cs", "da", "de", "en", "es", "fr", "hr", "hu", "it", "nl", "pl", "pt", "ro", "ru", "sl", "sr", "sv", "uk", "dataset:NickyNicky/aya_dataset_multilingual_chatml_gemma_response_json", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:53:34Z
--- license: apache-2.0 datasets: - NickyNicky/aya_dataset_multilingual_chatml_gemma_response_json model: - NickyNicky/gemma-2b-it_oasst2_chatML_Cluster_2_V1 language: - bg - ca - cs - da - de - en - es - fr - hr - hu - it - nl - pl - pt - ro - ru - sl - sr - sv - uk library_name: transformers widget: - text: | <bos><start_of_turn>system You are a helpful AI assistant. solo responde en formato json. lista de codigos linguisticos disponibles: ["es", "en", "fr", "de"].<end_of_turn> <start_of_turn>user { "input": "fr", "targets": "es", "inputs_fr": "Quels président des États-Unis ne s’est jamais marié ?", }<end_of_turn> <start_of_turn>model\n --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/YXqUXFjX8uIJT-mdOnM1h.png)

```
reference data model:

datasets:
  - lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
    link: https://huggingface.co/datasets/NickyNicky/oasst2_clusters

model:
  - google/gemma-2b-it
    Link: https://huggingface.co/google/gemma-2b-it

base fine tune: NickyNicky/gemma-2b-it_oasst2_chatML_Cluster_2_V1

Epoch: 2.5

future experts: 5

Eval model:
  - link: soon
```

## train/loss 0.2377

![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/oCI0UQjJ1Rux2JXUHtEZB.png)

##

```Python
!python -m pip install --upgrade pip
!pip install "torch>=2.1.1" -U
!pip install torchaudio==2.2.0
!pip install -q datasets trl peft bitsandbytes sentencepiece wandb
!pip install -q accelerate safetensors deepspeed
!pip install -q scipy ninja -U
!pip install -q -U transformers==4.38.0
```

## Version

```py
import torch
torch.__version__
#OUTPUTS: ('2.2.0+cu121')
```

## How to use

```py
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    pipeline,
    logging,
    GenerationConfig,
    TextIteratorStreamer,
)
from transformers import StoppingCriteria, StoppingCriteriaList
import torch

model_id = 'NickyNicky/gemma-2b-it_oasst2_Cluster_2_aya_dataset_multilingual_chatml_response_json_V1'

model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             trust_remote_code=True,
                                             torch_dtype=torch.bfloat16,
                                             attn_implementation="flash_attention_2",
                                             # load_in_4bit=True,
                                             # low_cpu_mem_usage= True,
                                             )

max_length = 1055
print("max_length", max_length)

tokenizer = AutoTokenizer.from_pretrained(model_id,
                                          # use_fast = False,
                                          max_length=max_length,)

class ListOfTokensStoppingCriteria(StoppingCriteria):
    """
    Class defining a stopping criterion based on a list of specific tokens.
""" def __init__(self, tokenizer, stop_tokens): self.tokenizer = tokenizer # Codifica cada token de parada y guarda sus IDs en una lista self.stop_token_ids_list = [tokenizer.encode(stop_token, add_special_tokens=False) for stop_token in stop_tokens] def __call__(self, input_ids, scores, **kwargs): # Verifica si los últimos tokens generados coinciden con alguno de los conjuntos de tokens de parada for stop_token_ids in self.stop_token_ids_list: len_stop_tokens = len(stop_token_ids) if len(input_ids[0]) >= len_stop_tokens: if input_ids[0, -len_stop_tokens:].tolist() == stop_token_ids: return True return False # Uso del criterio de parada personalizado stop_tokens = ["<end_of_turn>"] # Lista de tokens de parada # Inicializa tu criterio de parada con el tokenizer y la lista de tokens de parada stopping_criteria = ListOfTokensStoppingCriteria(tokenizer, stop_tokens) # Añade tu criterio de parada a una StoppingCriteriaList stopping_criteria_list = StoppingCriteriaList([stopping_criteria]) #EXAMPLE #1 input_text = """James Buchanan es el único presidente que nunca se casó.”""" input_language_code="es" #The 'targets' -key- with its respective value is for a response according to the language. targets_traslate= "en" # English response regarding language code -> "es", "en", "fr", "de" txt=f"""<bos><start_of_turn>system You are a helpful AI assistant. solo responde en formato json. lista de codigos linguisticos disponibles: ["es", "en", "fr", "de"].<end_of_turn> <start_of_turn>user {{ "input": "{input_language_code}", "targets": "{targets_traslate}", "inputs_{input_language_code}": "{input_text}", }}<end_of_turn> <start_of_turn>model """ ### OUTPUT EXAMPLE ###''' ###<start_of_turn>model ###{ ### "targets": "en", ### "targets_es": "" ###}<end_of_turn> ###''' inputs = tokenizer.encode(txt, return_tensors="pt", add_special_tokens=False).to("cuda:0") max_new_tokens=200 generation_config = GenerationConfig( max_new_tokens=max_new_tokens, temperature=0.32, #top_p=0.9, top_k=45, repetition_penalty=1., do_sample=True, ) outputs = model.generate(generation_config=generation_config, input_ids=inputs, stopping_criteria=stopping_criteria_list,) tokenizer.decode(outputs[0], skip_special_tokens=False) #True ```
SethGA/neocortex-grounded
SethGA
2024-02-28T20:33:52Z
3
1
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-hf", "base_model:adapter:NousResearch/Llama-2-7b-hf", "4-bit", "bitsandbytes", "region:us" ]
null
2024-02-28T19:03:20Z
--- library_name: peft tags: - axolotl - generated_from_trainer base_model: NousResearch/Llama-2-7b-hf model-index: - name: neocortex-grounded results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: NousResearch/Llama-2-7b-hf model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer is_llama_derived_model: true hub_model_id: neocortex-grounded load_in_8bit: false load_in_4bit: true strict: false datasets: - path: SethGA/neocortex_grounded_23k type: alpaca shards: 20 dataset_prepared_path: val_set_size: 0.05 output_dir: ./qlora-out adapter: qlora lora_model_dir: sequence_len: 4096 sample_packing: false eval_sample_packing: false pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: neocortex wandb_entity: wandb_watch: wandb_run_id: wandb_log_model: checkpoint gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 3 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 eval_steps: 20 eval_table_size: 5 save_strategy: epoch save_steps: debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" ``` </details><br> # neocortex-grounded This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7091 | 0.01 | 1 | 1.7034 | | 1.3312 | 0.29 | 20 | 1.2385 | | 1.1599 | 0.58 | 40 | 1.1702 | | 1.1673 | 0.87 | 60 | 1.1425 | | 1.0802 | 1.16 | 80 | 1.1291 | | 1.0736 | 1.45 | 100 | 1.1238 | | 1.0308 | 1.74 | 120 | 1.1185 | | 1.0042 | 2.03 | 140 | 1.1110 | | 0.997 | 2.32 | 160 | 1.1274 | | 0.8535 | 2.61 | 180 | 1.1278 | | 0.9331 | 2.9 | 200 | 1.1270 | ### Framework versions - PEFT 0.9.0 - Transformers 4.39.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.17.1 - Tokenizers 0.15.0
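Since this repo ships a QLoRA adapter rather than full weights, a minimal loading sketch may help; the hub ids come from the axolotl config above (base `NousResearch/Llama-2-7b-hf`, dataset `SethGA/neocortex_grounded_23k`), while the alpaca-style prompt is an assumption based on the dataset `type: alpaca`:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base Llama-2 model and applies the QLoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "SethGA/neocortex-grounded",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# Assumed alpaca-style template, matching the training data type in the config.
prompt = "### Instruction:\nAnswer the question.\n\n### Input:\nWhat is grounding?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```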
LoneStriker/Mixtral-8x7B-Holodeck-v1-5.0bpw-h6-exl2
LoneStriker
2024-02-28T20:30:09Z
8
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T20:16:52Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---
# Mixtral 8x7B - Holodeck
## Model Description
Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.
## Training data
The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]`
***
### Limitations and Biases
Based on known problems with NLP technology, potentially relevant factors include bias (gender, profession, race and religion).
JesseWeigel/autotrain-5dehv-0ke56
JesseWeigel
2024-02-28T20:25:52Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "autotrain", "conversational", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:50:49Z
--- tags: - autotrain - text-generation widget: - text: 'I love AutoTrain because ' license: mit language: - en library_name: transformers --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
yuleysi/autotrain-6gama-idpx7
yuleysi
2024-02-28T20:25:39Z
190
0
transformers
[ "transformers", "safetensors", "mobilenet_v1", "image-classification", "autotrain", "dataset:autotrain-6gama-idpx7/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-28T20:25:37Z
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
datasets:
- autotrain-6gama-idpx7/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.5906022787094116
- f1: 0.0
- precision: 0.0
- recall: 0.0
- auc: 1.0
- accuracy: 0.5
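A minimal inference sketch; the image path is a placeholder, and the label names are whatever the uploaded AutoTrain dataset used:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="yuleysi/autotrain-6gama-idpx7")
# Accepts a local path, URL, or PIL image; returns [{"label": ..., "score": ...}, ...]
print(classifier("path/to/image.jpg"))
```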
eduvedras/pix2struct-textcaps-base-desc-5000ep-1e-5lr
eduvedras
2024-02-28T20:19:16Z
51
0
transformers
[ "transformers", "safetensors", "pix2struct", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-02-28T19:16:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/Mixtral-8x7B-Holodeck-v1-4.0bpw-h6-exl2
LoneStriker
2024-02-28T20:16:50Z
6
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T20:07:10Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---
# Mixtral 8x7B - Holodeck
## Model Description
Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.
## Training data
The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]`
***
### Limitations and Biases
Based on known problems with NLP technology, potentially relevant factors include bias (gender, profession, race and religion).
EE21/PPO-LunarLander-v2
EE21
2024-02-28T20:15:51Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T20:15:34Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 282.07 +/- 21.37
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub("EE21/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
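To sanity-check the reported mean reward, a short evaluation sketch; it assumes a local `gymnasium` install with the Box2D extra and reuses `model` from the loading snippet above:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```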
rbtprograms/merged_mistral_base_math
rbtprograms
2024-02-28T20:08:58Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2203.05482", "base_model:WizardLMTeam/WizardMath-7B-V1.1", "base_model:merge:WizardLMTeam/WizardMath-7B-V1.1", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:56:05Z
--- base_model: - WizardLM/WizardMath-7B-V1.1 - mistralai/Mistral-7B-v0.1 library_name: transformers tags: - mergekit - merge --- # merged_2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float16 merge_method: linear slices: - sources: - layer_range: [0, 32] model: model: path: mistralai/Mistral-7B-v0.1 parameters: weight: 1.0 - layer_range: [0, 32] model: model: path: WizardLM/WizardMath-7B-V1.1 parameters: weight: 0.0 ```
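For intuition, a minimal sketch of what the linear method computes per tensor; this is illustrative only, not mergekit's actual code, which also normalizes weights and iterates over full checkpoints:

```python
import torch

def linear_merge(tensors, weights):
    """Weighted average of same-shaped weight tensors (the 'model soups' recipe)."""
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, tensors)) / total

# With the weights above (1.0 for Mistral-7B-v0.1, 0.0 for WizardMath-7B-V1.1),
# the merged tensor is simply the Mistral tensor.
a = torch.randn(4, 4)  # stands in for one Mistral-7B-v0.1 weight matrix
b = torch.randn(4, 4)  # stands in for the matching WizardMath weight matrix
merged = linear_merge([a, b], [1.0, 0.0])
assert torch.allclose(merged, a)
```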
chaouch/Reinforce-CartPole-v1
chaouch
2024-02-28T20:08:48Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T10:08:41Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
hcene/results
hcene
2024-02-28T20:00:16Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:MoritzLaurer/mDeBERTa-v3-base-mnli-xnli", "base_model:adapter:MoritzLaurer/mDeBERTa-v3-base-mnli-xnli", "license:mit", "region:us" ]
null
2024-02-28T19:59:37Z
--- license: mit library_name: peft tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 base_model: MoritzLaurer/mDeBERTa-v3-base-mnli-xnli model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6765 - Accuracy: 0.7634 - Precision: 0.7675 - Recall: 0.7644 - F1: 0.7627 - Ratio: 0.3297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 20 - eval_batch_size: 20 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - lr_scheduler_warmup_steps: 4 - num_epochs: 20 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ratio | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:| | 1.7741 | 0.17 | 10 | 1.0961 | 0.7061 | 0.7103 | 0.7073 | 0.7078 | 0.3262 | | 1.2149 | 0.34 | 20 | 0.8783 | 0.7025 | 0.7066 | 0.7038 | 0.7044 | 0.3262 | | 0.959 | 0.52 | 30 | 0.8413 | 0.6774 | 0.6981 | 0.6784 | 0.6854 | 0.2939 | | 0.9582 | 0.69 | 40 | 0.7705 | 0.7312 | 0.7417 | 0.7321 | 0.7314 | 0.3190 | | 0.8706 | 0.86 | 50 | 0.6728 | 0.7419 | 0.7545 | 0.7437 | 0.7385 | 0.3190 | | 0.8804 | 1.03 | 60 | 0.6933 | 0.7133 | 0.7402 | 0.7157 | 0.6919 | 0.3190 | | 0.8999 | 1.21 | 70 | 0.7167 | 0.7133 | 0.7208 | 0.7144 | 0.7158 | 0.3190 | | 0.8914 | 1.38 | 80 | 0.6910 | 0.7384 | 0.7549 | 0.7390 | 0.7325 | 0.3226 | | 0.8578 | 1.55 | 90 | 0.6862 | 0.7348 | 0.7533 | 0.7369 | 0.7243 | 0.3262 | | 0.8755 | 1.72 | 100 | 0.6889 | 0.7240 | 0.7449 | 0.7263 | 0.7095 | 0.3262 | | 0.8551 | 1.9 | 110 | 0.7090 | 0.7133 | 0.7408 | 0.7158 | 0.6899 | 0.3262 | | 0.8736 | 2.07 | 120 | 0.7019 | 0.7097 | 0.7236 | 0.7120 | 0.6975 | 0.3262 | | 0.8647 | 2.24 | 130 | 0.7078 | 0.7240 | 0.7354 | 0.7261 | 0.7174 | 0.3262 | | 0.8755 | 2.41 | 140 | 0.7023 | 0.7527 | 0.7716 | 0.7533 | 0.7448 | 0.3262 | | 0.858 | 2.59 | 150 | 0.6745 | 0.7384 | 0.7450 | 0.7393 | 0.7372 | 0.3262 | | 0.8912 | 2.76 | 160 | 0.6842 | 0.7491 | 0.7635 | 0.7511 | 0.7424 | 0.3297 | | 0.8294 | 2.93 | 170 | 0.6623 | 0.7599 | 0.7624 | 0.7609 | 0.7602 | 0.3297 | | 0.8481 | 3.1 | 180 | 0.6652 | 0.7599 | 0.7715 | 0.7617 | 0.7551 | 0.3333 | | 0.8488 | 3.28 | 190 | 0.6782 | 0.7312 | 0.7609 | 0.7335 | 0.7131 | 0.3297 | | 0.8418 | 3.45 | 200 | 0.6884 | 0.7706 | 0.7738 | 0.7719 | 0.7720 | 0.3262 | | 0.8774 | 3.62 | 210 | 0.7066 | 0.7419 | 0.7523 | 0.7438 | 0.7381 | 0.3262 | | 0.8496 | 3.79 | 220 | 0.6687 | 0.7133 | 0.7214 | 0.7154 | 0.7060 | 0.3333 | | 0.825 | 3.97 | 230 | 0.6618 | 0.7634 | 0.7833 | 0.7639 | 0.7547 | 0.3297 | | 0.8933 | 4.14 | 240 | 0.6946 | 0.7419 | 0.7692 | 0.7424 | 0.7278 | 0.3262 | | 0.8579 | 4.31 | 250 | 0.6795 | 0.7491 | 0.7786 | 0.7495 | 0.7353 | 0.3262 | | 0.8023 | 4.48 | 260 | 0.6595 | 0.7563 | 0.7727 | 0.7569 | 0.7501 | 0.3262 | | 0.8736 | 4.66 | 270 | 0.6703 | 0.7491 | 0.7558 | 
0.7508 | 0.7482 | 0.3262 | | 0.8291 | 4.83 | 280 | 0.7102 | 0.6989 | 0.7630 | 0.7019 | 0.6499 | 0.3262 | | 0.8923 | 5.0 | 290 | 0.7004 | 0.7097 | 0.7571 | 0.7124 | 0.6756 | 0.3262 | | 0.8571 | 5.17 | 300 | 0.6739 | 0.7634 | 0.7717 | 0.7642 | 0.7621 | 0.3262 | | 0.8521 | 5.34 | 310 | 0.6666 | 0.7563 | 0.7710 | 0.7569 | 0.7511 | 0.3262 | | 0.8369 | 5.52 | 320 | 0.6815 | 0.7455 | 0.7487 | 0.7467 | 0.7472 | 0.3262 | | 0.7897 | 5.69 | 330 | 0.6731 | 0.7097 | 0.7343 | 0.7122 | 0.6871 | 0.3262 | | 0.8801 | 5.86 | 340 | 0.6773 | 0.7419 | 0.7631 | 0.7441 | 0.7304 | 0.3297 | | 0.891 | 6.03 | 350 | 0.7107 | 0.7491 | 0.7556 | 0.7509 | 0.7473 | 0.3297 | | 0.8444 | 6.21 | 360 | 0.6805 | 0.7634 | 0.7879 | 0.7639 | 0.7543 | 0.3262 | | 0.8375 | 6.38 | 370 | 0.6562 | 0.7599 | 0.7725 | 0.7605 | 0.7560 | 0.3262 | | 0.8141 | 6.55 | 380 | 0.6578 | 0.7276 | 0.7409 | 0.7296 | 0.7217 | 0.3262 | | 0.8792 | 6.72 | 390 | 0.6790 | 0.7204 | 0.7355 | 0.7226 | 0.7121 | 0.3262 | | 0.8868 | 6.9 | 400 | 0.7063 | 0.7384 | 0.7411 | 0.7397 | 0.7404 | 0.3262 | | 0.8767 | 7.07 | 410 | 0.7074 | 0.7240 | 0.7440 | 0.7262 | 0.7126 | 0.3262 | | 0.8545 | 7.24 | 420 | 0.6725 | 0.7276 | 0.7520 | 0.7300 | 0.7108 | 0.3297 | | 0.8589 | 7.41 | 430 | 0.6712 | 0.7276 | 0.7473 | 0.7299 | 0.7139 | 0.3297 | | 0.8522 | 7.59 | 440 | 0.6853 | 0.7634 | 0.7655 | 0.7649 | 0.7644 | 0.3297 | | 0.777 | 7.76 | 450 | 0.6623 | 0.7634 | 0.7714 | 0.7642 | 0.7604 | 0.3297 | | 0.8903 | 7.93 | 460 | 0.6629 | 0.7599 | 0.7629 | 0.7609 | 0.7598 | 0.3297 | | 0.8168 | 8.1 | 470 | 0.6714 | 0.7599 | 0.7650 | 0.7608 | 0.7584 | 0.3297 | | 0.7979 | 8.28 | 480 | 0.6469 | 0.7491 | 0.7505 | 0.7505 | 0.7504 | 0.3297 | | 0.8674 | 8.45 | 490 | 0.6553 | 0.7455 | 0.7603 | 0.7475 | 0.7382 | 0.3297 | | 0.8475 | 8.62 | 500 | 0.6788 | 0.7563 | 0.7576 | 0.7576 | 0.7576 | 0.3297 | | 0.8723 | 8.79 | 510 | 0.6862 | 0.7599 | 0.7613 | 0.7612 | 0.7611 | 0.3297 | | 0.8684 | 8.97 | 520 | 0.6938 | 0.7563 | 0.7604 | 0.7579 | 0.7560 | 0.3297 | | 0.8278 | 9.14 | 530 | 0.6765 | 0.7634 | 0.7675 | 0.7644 | 0.7627 | 0.3297 | ### Framework versions - PEFT 0.9.0 - Transformers 4.39.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
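Since this repo contains a LoRA adapter rather than full weights, a minimal loading sketch may help; the ids come from the card, and it assumes the adapter keeps the base model's three-way NLI head (pass `num_labels` explicitly if the task used different labels):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id)

# Attach the fine-tuned adapter on top of the NLI base model.
model = PeftModel.from_pretrained(base, "hcene/results")
model.eval()

inputs = tokenizer("A premise sentence.", "A hypothesis sentence.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # label order follows the base NLI model
```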
lvcalucioli/flan-t5-large__syntetic-question-answering
lvcalucioli
2024-02-28T19:58:37Z
108
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-large", "base_model:finetune:google/flan-t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-28T19:56:38Z
--- license: apache-2.0 base_model: google/flan-t5-large tags: - generated_from_trainer model-index: - name: flan-t5-large__syntetic-question-answering results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large__syntetic-question-answering This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 8 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.2
JeswinMS4/distilroberta-dark-pattern2
JeswinMS4
2024-02-28T19:51:46Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T19:37:08Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilroberta-dark-pattern2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-dark-pattern2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3470 - Accuracy: 0.9412 - F1: 0.9411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.54 | 500 | 0.2276 | 0.9308 | 0.9307 | | 0.2443 | 3.08 | 1000 | 0.3027 | 0.9516 | 0.9516 | | 0.0757 | 4.62 | 1500 | 0.3470 | 0.9412 | 0.9411 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
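A minimal usage sketch; the label names are whatever the (undocumented) training dataset used:

```python
from transformers import pipeline

detector = pipeline("text-classification", model="JeswinMS4/distilroberta-dark-pattern2")
print(detector("Hurry! Only 2 left in stock - order now!"))
```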
marisabatalla/autotrain-2t8zz-mwbyj
marisabatalla
2024-02-28T19:50:17Z
191
0
transformers
[ "transformers", "safetensors", "mobilenet_v1", "image-classification", "autotrain", "dataset:autotrain-2t8zz-mwbyj/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-28T19:50:15Z
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
datasets:
- autotrain-2t8zz-mwbyj/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.577736496925354
- f1: 1.0
- precision: 1.0
- recall: 1.0
- auc: 0.0
- accuracy: 1.0
LoneStriker/Mixtral-8x7B-Holodeck-v1-3.0bpw-h6-exl2
LoneStriker
2024-02-28T19:49:37Z
6
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "pytorch", "fine-tuned", "moe", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:34:34Z
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- pytorch
- mixtral
- fine-tuned
- moe
---
# Mixtral 8x7B - Holodeck
## Model Description
Mixtral 8x7B-Holodeck is a finetune created using Mixtral's 8x7B model.
## Training data
The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]`
***
### Limitations and Biases
Based on known problems with NLP technology, potentially relevant factors include bias (gender, profession, race and religion).
sarak7/H14_228_769_v1
sarak7
2024-02-28T19:49:33Z
171
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:48:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dyang415/mixtral-lora-v0
dyang415
2024-02-28T19:44:34Z
1092
1
peft
[ "peft", "tensorboard", "safetensors", "mixtral", "axolotl", "generated_from_trainer", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:adapter:mistralai/Mixtral-8x7B-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2024-02-28T19:12:54Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: mistralai/Mixtral-8x7B-v0.1 model-index: - name: mixtral-lora-v0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: mistralai/Mixtral-8x7B-v0.1 model_type: AutoModelForCausalLM tokenizer_type: LlamaTokenizer trust_remote_code: true load_in_8bit: false load_in_4bit: true strict: false chat_template: inst datasets: - path: ./data/tool_used_training.jsonl type: sharegpt conversation: mistral - path: ./data/tool_not_used_training.jsonl type: sharegpt conversation: mistral - path: ./data/no_tools_training.jsonl type: sharegpt conversation: mistral dataset_prepared_path: last_run_prepared val_set_size: 0.0 output_dir: ../mixtral-fc-v0 model_config: output_router_logits: true adapter: lora lora_model_dir: sequence_len: 4096 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: #lora_target_modules: # - gate # - q_proj # - k_proj # - v_proj # - o_proj # - w1 # - w2 # - w3 # wandb_project: function-call # wandb_name: mixtral-instruct-lora--v1 # wandb_log_model: end hub_model_id: dyang415/mixtral-lora-v0 gradient_accumulation_steps: 2 micro_batch_size: 1 num_epochs: 0.1 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true loss_watchdog_threshold: 5.0 loss_watchdog_patience: 3 warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: deepspeed_configs/zero2.json weight_decay: 0.0 fsdp: fsdp_config: ``` </details><br> # mixtral-lora-v0 This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the None dataset. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: QuantizationMethod.BITS_AND_BYTES - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - total_eval_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 0.1 ### Training results ### Framework versions - PEFT 0.7.0 - Transformers 4.37.0 - Pytorch 2.1.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.0
TIGER-Lab/StructLM-34B
TIGER-Lab
2024-02-28T19:40:17Z
9
15
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:TIGER-Lab/SKGInstruct", "arxiv:2402.16671", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-25T21:49:56Z
---
license: mit
datasets:
- TIGER-Lab/SKGInstruct
language:
- en
---
# 🏗️ StructLM: Towards Building Generalist Models for Structured Knowledge Grounding

Project Page: [https://tiger-ai-lab.github.io/StructLM/](https://tiger-ai-lab.github.io/StructLM/)

Paper: [https://arxiv.org/pdf/2402.16671.pdf](https://arxiv.org/pdf/2402.16671.pdf)

Code: [https://github.com/TIGER-AI-Lab/StructLM](https://github.com/TIGER-AI-Lab/StructLM)

![Alt text](https://raw.githubusercontent.com/TIGER-AI-Lab/StructLM/gh-pages/static/images/thumbnail.drawio.png)

## Introduction
StructLM is a series of open-source large language models (LLMs) finetuned for structured knowledge grounding (SKG) tasks. We release 3 models:

7B | [StructLM-7B](https://huggingface.co/TIGER-Lab/StructLM-7B)

13B | [StructLM-13B](https://huggingface.co/TIGER-Lab/StructLM-13B)

34B | [StructLM-34B](https://huggingface.co/TIGER-Lab/StructLM-34B)

## Training Data
These models are trained on 🤗 [SKGInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/SKGInstruct), an instruction-tuning dataset containing a mixture of 19 SKG tasks combined with 🤗 [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Check out the dataset card for more details.

## Training Procedure
The models are fine-tuned with CodeLlama-Instruct-hf models as base models. Each model is trained for 3 epochs, and the best checkpoint is selected.

## Evaluation
Here is a subset of model evaluation results:

### Held in

| **Model** | **ToTTo** | **GrailQA** | **CompWebQ** | **MMQA** | **Feverous** | **Spider** | **TabFact** | **Dart** |
|-----------------------|--------------|----------|----------|----------|----------|----------|----------|----------|
| **StructLM-7B** | 49.4 | 80.4 | 78.3 | 85.2 | 84.4 | 72.4 | 80.8 | 62.2 |
| **StructLM-13B** | 49.3 | 79.2 | 80.4 | 86.0 | 85.0 | 74.1 | 84.7 | 61.4 |
| **StructLM-34B** | 50.2 | 82.2 | 81.9 | 88.1 | 85.7 | 74.6 | 86.6 | 61.8 |

### Held out

| **Model** | **BIRD** | **InfoTabs** | **FinQA** | **SQA** |
|-----------------------|--------------|----------|----------|----------|
| **StructLM-7B** | 22.3 | 55.3 | 27.3 | 49.7 |
| **StructLM-13B** | 22.8 | 58.1 | 25.6 | 36.1 |
| **StructLM-34B** | 24.7 | 61.8 | 36.2 | 44.2 |

## Usage
You can use the models through Huggingface's Transformers library. Check our Github repo for the evaluation code: [https://github.com/TIGER-AI-Lab/StructLM](https://github.com/TIGER-AI-Lab/StructLM)

## Prompt Format

\*\*\***IMPORTANT**\*\*\* **For this 34B model, the prompt format (different from 7B) is**
```
[INST] [INST] <<SYS>>
You are an AI assistant that specializes in analyzing and reasoning over structured information. You will be given a task, optionally with some structured knowledge input. Your answer must strictly adhere to the output format, if specified.
<</SYS>>

{instruction} [/INST] [/INST]
```
To see concrete examples of this linearization, you can directly reference the 🤗 [SKGInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/SKGInstruct) (coming soon). We will provide code for linearizing this data shortly.
A few examples: **tabular data** ``` col : day | kilometers row 1 : tuesday | 0 row 2 : wednesday | 0 row 3 : thursday | 4 row 4 : friday | 0 row 5 : saturday | 0 ``` **knowledge triples (dart)** ``` Hawaii Five-O : notes : Episode: The Flight of the Jewels | [TABLECONTEXT] : [title] : Jeff Daniels | [TABLECONTEXT] : title : Hawaii Five-O ``` **knowledge graph schema (grailqa)** ``` top antiquark: m.094nrqp | physics.particle_antiparticle.self_antiparticle physics.particle_family physics.particle.antiparticle physics.particle_family.subclasses physics.subatomic_particle_generation physics.particle_family.particles physics.particle common.image.appears_in_topic_gallery physics.subatomic_particle_generation.particles physics.particle.family physics.particle_family.parent_class physics.particle_antiparticle physics.particle_antiparticle.particle physics.particle.generation ``` **example input** ``` [INST] [INST] <<SYS>> You are an AI assistant that specializes in analyzing and reasoning over structured information. You will be given a task, optionally with some structured knowledge input. Your answer must strictly adhere to the output format, if specified. <</SYS>> Use the information in the following table to solve the problem, choose between the choices if they are provided. table: col : day | kilometers row 1 : tuesday | 0 row 2 : wednesday | 0 row 3 : thursday | 4 row 4 : friday | 0 row 5 : saturday | 0 question: Allie kept track of how many kilometers she walked during the past 5 days. What is the range of the numbers? [/INST] [/INST] ``` ## Intended Uses These models are trained for research purposes. They are designed to be proficient in interpreting linearized structured input. Downstream uses can potentially include various applications requiring the interpretation of structured data. ## Limitations While we've tried to build an SKG-specialized model capable of generalizing, we have shown that this is a challenging domain, and it may lack performance characteristics that allow it to be directly used in chat or other applications. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @misc{zhuang2024structlm, title={StructLM: Towards Building Generalist Models for Structured Knowledge Grounding}, author={Alex Zhuang and Ge Zhang and Tianyu Zheng and Xinrun Du and Junjie Wang and Weiming Ren and Stephen W. Huang and Jie Fu and Xiang Yue and Wenhu Chen}, year={2024}, eprint={2402.16671}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
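For reference, a minimal generation sketch that wraps the example input above in the 34B prompt format; the generation settings and decoding choices are assumptions, not values from the paper:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/StructLM-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = (
    "You are an AI assistant that specializes in analyzing and reasoning over structured "
    "information. You will be given a task, optionally with some structured knowledge input. "
    "Your answer must strictly adhere to the output format, if specified."
)
instruction = (
    "Use the information in the following table to solve the problem, choose between the "
    "choices if they are provided.\n\ntable:\n\ncol : day | kilometers row 1 : tuesday | 0 "
    "row 2 : wednesday | 0 row 3 : thursday | 4 row 4 : friday | 0 row 5 : saturday | 0\n\n"
    "question:\n\nAllie kept track of how many kilometers she walked during the past 5 days. "
    "What is the range of the numbers?"
)
# Note the doubled [INST] tags, which this 34B checkpoint expects.
prompt = f"[INST] [INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST] [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```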
ankhamun/Ixo_0-0_oxI
ankhamun
2024-02-28T19:29:53Z
115
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:27:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
Epiculous
2024-02-28T19:29:24Z
8
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-06T18:28:38Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # Fett-uccine-Long-Noodle-7B-120k-Context This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details A merge with Fett-uccine and Mistral Yarn 120k ctx. Credit to Nitral for the merge script and idea. ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * Z:\ModelColdStorage\Yarn-Mistral-7b-128k * Z:\ModelColdStorage\Fett-uccine-7B ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Z:\ModelColdStorage\Fett-uccine-7B layer_range: [0, 32] - model: Z:\ModelColdStorage\Yarn-Mistral-7b-128k layer_range: [0, 32] merge_method: slerp base_model: Z:\ModelColdStorage\Fett-uccine-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
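For intuition, a minimal sketch of the SLERP interpolation applied to a pair of weight tensors; this is illustrative only, since mergekit's implementation additionally applies the per-filter `t` schedule in the config above:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    a, b = v0.flatten().double(), v1.flatten().double()
    cos_omega = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.arccos(cos_omega)
    if omega.abs() < eps:
        out = (1 - t) * a + t * b  # nearly parallel: fall back to linear interpolation
    else:
        out = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return out.reshape(v0.shape).to(v0.dtype)

merged = slerp(0.5, torch.randn(16, 16), torch.randn(16, 16))
```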
dgunzy/capstone-t5-flan-summerizer
dgunzy
2024-02-28T19:20:48Z
106
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:billsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-21T15:04:47Z
---
license: apache-2.0
datasets:
- billsum
---
A google/flan-t5-large model fine-tuned on the billsum dataset for document summarization. Training followed this video tutorial: https://www.youtube.com/watch?v=tc87-ZKWm78&list=LL&index=4.
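A minimal usage sketch; the input string is a placeholder, since billsum documents are full US bill texts:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dgunzy/capstone-t5-flan-summerizer")
bill_text = "SECTION 1. SHORT TITLE. This Act may be cited as the Example Act of 2024. ..."
print(summarizer(bill_text)[0]["summary_text"])
```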
Jahongir94/uzbert
Jahongir94
2024-02-28T19:18:18Z
0
0
transformers
[ "transformers", "text-classification", "dataset:Ravshan/kun_uz_news", "dataset:s3h/custom-qalb-classification", "dataset:tahrirchi/uz-crawl", "dataset:tahrirchi/uz-books", "dataset:latofat/uzpos", "dataset:Sanatbek/uzbek-kazakh-parallel-corpora", "dataset:elmurod1202/uzbek-sentiment-analysis", "dataset:murodbek/uz-text-classification", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T19:07:49Z
--- datasets: - Ravshan/kun_uz_news - s3h/custom-qalb-classification - tahrirchi/uz-crawl - tahrirchi/uz-books - latofat/uzpos - Sanatbek/uzbek-kazakh-parallel-corpora - elmurod1202/uzbek-sentiment-analysis - murodbek/uz-text-classification metrics: - bertscore library_name: transformers pipeline_tag: text-classification ---
ArchiveAI/Thespis-CurtainCall-7b-v0.1.2
ArchiveAI
2024-02-28T19:17:53Z
1
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:17:53Z
---
license: cc-by-nc-4.0
---
This model is the first in a series of experiments to make my models a bit smarter. It's nowhere near done, but my initial testing was good, so I'm uploading it so people can check it out.

Datasets Used:
* OpenOrcaSlim
* Dolphin
* Capybara
* Augmental
* ToxicQA
* Magiccoder-Evol-Instruct-110k

## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )

```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```

## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.03)
## Recommended Kobold Horde Preset -> MinP
lucio/wav2vec2-large-xlsr-luganda
lucio
2024-02-28T19:15:11Z
19
1
transformers
[ "transformers", "pytorch", "jax", "safetensors", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "lg", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: lg datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large Luganda by Lucio results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice lg type: common_voice args: lg metrics: - name: Test WER type: wer value: 29.52 --- # Wav2Vec2-Large-XLSR-53-lg Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, using train, validation and other (excluding voices that are in the test set), and taking the test data for validation as well as test. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "lg", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda") model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Luganda test data of Common Voice. (Available in Colab [here](https://colab.research.google.com/drive/1XxZ3mJOEXwIn-QH3C23jD_Qpom9aA1vH?usp=sharing).) ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import unidecode test_dataset = load_dataset("common_voice", "lg", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda") model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda") model.to("cuda") chars_to_ignore_regex = '[\[\],?.!;:%"“”(){}‟ˮʺ″«»/…‽�–]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. 
# We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch def remove_special_characters(batch): # word-internal apostrophes are marking contractions batch["norm_text"] = re.sub(r'[‘’´`]', r"'", batch["sentence"]) # most other punctuation is ignored batch["norm_text"] = re.sub(chars_to_ignore_regex, "", batch["norm_text"]).lower().strip() batch["norm_text"] = re.sub(r"(-|' | '| +)", " ", batch["norm_text"]) # remove accents from a few characters (from loanwords, not tones) batch["norm_text"] = unidecode.unidecode(batch["norm_text"]) return batch test_dataset = test_dataset.map(speech_file_to_array_fn) test_dataset = test_dataset.map(remove_special_characters) def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["norm_text"]))) ``` **Test Result**: 29.52 % ## Training The Common Voice `train`, `validation` and `other` datasets were used for training, excluding voices that are in both the `other` and `test` datasets. The data was augmented to twice the original size with added noise and manipulated pitch, phase and intensity. Training proceeded for 60 epochs, on 1 V100 GPU provided by OVHcloud. The `test` data was used for validation. The [script used for training](https://github.com/serapio/transformers/blob/feature/xlsr-finetune/examples/research_projects/wav2vec2/run_common_voice.py) is adapted from the [example script provided in the transformers repo](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py).
Emperor-WS/a2c-PandaReachDense-v3
Emperor-WS
2024-02-28T19:13:41Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-02-28T19:09:37Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.18 +/- 0.07
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch — the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the standard sb3 naming convention
checkpoint = load_from_hub(
    repo_id="Emperor-WS/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```
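To run the loaded agent, here is a short rollout sketch, assuming `gymnasium` and `panda-gym` are installed (importing `panda_gym` registers the `PandaReachDense-v3` environment) and reusing `model` from the block above:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- side effect: registers PandaReachDense-v3

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(200):
    # deterministic=True takes the greedy policy action
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```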
Lienid/nous-thirteen
Lienid
2024-02-28T19:12:12Z
115
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:09:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
peldrak/segformer-b3-cityscapes-512-512-finetuned-coastTrain_50epochs
peldrak
2024-02-28T19:11:41Z
191
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/segformer-b3-finetuned-cityscapes-1024-1024", "base_model:finetune:nvidia/segformer-b3-finetuned-cityscapes-1024-1024", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2024-02-27T17:51:59Z
---
license: other
base_model: nvidia/segformer-b3-finetuned-cityscapes-1024-1024
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b3-cityscapes-512-512-finetuned-coastTrain_50epochs
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# segformer-b3-cityscapes-512-512-finetuned-coastTrain_50epochs

This model is a fine-tuned version of [nvidia/segformer-b3-finetuned-cityscapes-1024-1024](https://huggingface.co/nvidia/segformer-b3-finetuned-cityscapes-1024-1024) on the peldrak/coastTrain_512-512 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5328
- Mean Iou: 0.7402
- Mean Accuracy: 0.8345
- Overall Accuracy: 0.8995
- Accuracy Water: 0.9472
- Accuracy Whitewater: 0.8139
- Accuracy Sediment: 0.8796
- Accuracy Other Natural Terrain: 0.5587
- Accuracy Vegetation: 0.9175
- Accuracy Development: 0.8744
- Accuracy Unknown: 0.8503
- Iou Water: 0.8830
- Iou Whitewater: 0.6854
- Iou Sediment: 0.7777
- Iou Other Natural Terrain: 0.4534
- Iou Vegetation: 0.8340
- Iou Development: 0.7384
- Iou Unknown: 0.8091

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Water | Accuracy Whitewater | Accuracy Sediment | Accuracy Other Natural Terrain | Accuracy Vegetation | Accuracy Development | Accuracy Unknown | Iou Water | Iou Whitewater | Iou Sediment | Iou Other Natural Terrain | Iou Vegetation | Iou Development | Iou Unknown |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------:|:-------------------:|:-----------------:|:------------------------------:|:-------------------:|:--------------------:|:----------------:|:---------:|:--------------:|:------------:|:-------------------------:|:--------------:|:---------------:|:-----------:|
| 1.819 | 0.05 | 20 | 1.6652 | 0.1742 | 0.2778 | 0.4848 | 0.5623 | 0.0179 | 0.3709 | 0.0031 | 0.8151 | 0.1633 | 0.0121 | 0.3821 | 0.0134 | 0.2639 | 0.0029 | 0.4354 | 0.1100 | 0.0116 |
| 1.7261 | 0.11 | 40 | 1.3937 | 0.2544 | 0.3553 | 0.6008 | 0.7592 | 0.0293 | 0.1060 | 0.0001 | 0.8580 | 0.4454 | 0.2892 | 0.5488 | 0.0283 | 0.0911 | 0.0001 | 0.4885 | 0.3368 | 0.2871 |
| 1.2623 | 0.16 | 60 | 1.2644 | 0.2652 | 0.3629 | 0.6090 | 0.6938 | 0.0572 | 0.0832 | 0.0000 | 0.9639 | 0.4355 | 0.3063 | 0.5505 | 0.0565 | 0.0759 | 0.0000 | 0.4960 | 0.3729 | 0.3048 |
| 1.0987 | 0.22 | 80 | 1.1180 | 0.2889 | 0.3841 | 0.6546 | 0.7972 | 0.0001 | 0.0538 | 0.0 | 0.9640 | 0.4985 | 0.3754 | 0.6099 | 0.0001 | 0.0520 | 0.0 | 0.5473 | 0.4394 | 0.3734 |
| 1.1484 | 0.27 | 100 | 1.0266 | 0.3212 | 0.4211 | 0.6828 | 0.8465 | 0.0015 | 0.1904 | 0.0 | 0.9195 | 0.5661 | 0.4237 | 0.6575 | 0.0015 | 0.1690 | 0.0 | 0.5755 | 0.4459 | 0.3990 |
| 0.8779 | 0.32 | 120 | 0.9490 | 0.3473 | 0.4439 | 0.7077 | 0.8838 | 0.0 | 0.3156 | 0.0 | 0.9316 | 0.5962 | 0.3798 | 0.6657 | 0.0 | 0.2738 | 0.0 | 0.6232 | 0.4931 | 0.3751 |
| 1.103 | 0.38 |
140 | 0.8613 | 0.3941 | 0.5050 | 0.7409 | 0.8917 | 0.0 | 0.5959 | 0.0 | 0.8945 | 0.7355 | 0.4175 | 0.7125 | 0.0 | 0.4510 | 0.0 | 0.6530 | 0.5313 | 0.4111 | | 1.4642 | 0.43 | 160 | 0.8631 | 0.3867 | 0.5084 | 0.7286 | 0.8705 | 0.0007 | 0.5328 | 0.0 | 0.8461 | 0.8428 | 0.4660 | 0.6951 | 0.0007 | 0.4323 | 0.0 | 0.6318 | 0.5188 | 0.4284 | | 1.058 | 0.49 | 180 | 0.8220 | 0.4019 | 0.5133 | 0.7434 | 0.8719 | 0.0000 | 0.5454 | 0.0 | 0.8929 | 0.7774 | 0.5052 | 0.7278 | 0.0000 | 0.4200 | 0.0 | 0.6344 | 0.5595 | 0.4717 | | 1.0423 | 0.54 | 200 | 0.8452 | 0.3867 | 0.5301 | 0.7195 | 0.8196 | 0.0104 | 0.6703 | 0.0 | 0.7908 | 0.8796 | 0.5399 | 0.7352 | 0.0104 | 0.4538 | 0.0 | 0.6000 | 0.4181 | 0.4894 | | 0.8507 | 0.59 | 220 | 0.7354 | 0.4301 | 0.5454 | 0.7661 | 0.9063 | 0.0308 | 0.7024 | 0.0 | 0.8734 | 0.8135 | 0.4916 | 0.7511 | 0.0307 | 0.5356 | 0.0 | 0.6695 | 0.5479 | 0.4758 | | 1.304 | 0.65 | 240 | 0.7374 | 0.4177 | 0.5408 | 0.7606 | 0.8990 | 0.0030 | 0.8217 | 0.0 | 0.8734 | 0.7723 | 0.4161 | 0.7410 | 0.0030 | 0.4974 | 0.0 | 0.6902 | 0.5776 | 0.4146 | | 0.7238 | 0.7 | 260 | 0.7586 | 0.4108 | 0.5126 | 0.7530 | 0.9048 | 0.0271 | 0.5367 | 0.0 | 0.9085 | 0.7206 | 0.4905 | 0.7142 | 0.0270 | 0.3574 | 0.0 | 0.6964 | 0.5990 | 0.4819 | | 0.9137 | 0.76 | 280 | 0.6967 | 0.4678 | 0.5741 | 0.7804 | 0.8805 | 0.2221 | 0.8397 | 0.0 | 0.9340 | 0.6376 | 0.5046 | 0.7705 | 0.2139 | 0.5748 | 0.0 | 0.6791 | 0.5415 | 0.4949 | | 0.5555 | 0.81 | 300 | 0.6864 | 0.4752 | 0.5933 | 0.7788 | 0.9220 | 0.2630 | 0.8194 | 0.0 | 0.8287 | 0.8053 | 0.5147 | 0.7723 | 0.2512 | 0.6276 | 0.0 | 0.6633 | 0.5219 | 0.4899 | | 1.7594 | 0.86 | 320 | 0.6829 | 0.5031 | 0.6023 | 0.7944 | 0.8926 | 0.3306 | 0.7444 | 0.0 | 0.9259 | 0.7647 | 0.5581 | 0.7768 | 0.3155 | 0.5980 | 0.0 | 0.6855 | 0.6226 | 0.5237 | | 0.7284 | 0.92 | 340 | 0.6587 | 0.5048 | 0.6187 | 0.8047 | 0.9176 | 0.3054 | 0.8405 | 0.0 | 0.8949 | 0.8480 | 0.5245 | 0.8080 | 0.2814 | 0.6013 | 0.0 | 0.7106 | 0.6180 | 0.5140 | | 0.5838 | 0.97 | 360 | 0.6600 | 0.5134 | 0.6207 | 0.7979 | 0.8778 | 0.3782 | 0.8248 | 0.0 | 0.9162 | 0.7745 | 0.5737 | 0.7797 | 0.3487 | 0.6325 | 0.0 | 0.6911 | 0.6161 | 0.5257 | | 0.9766 | 1.03 | 380 | 0.6475 | 0.5219 | 0.6364 | 0.8039 | 0.8934 | 0.4583 | 0.8557 | 0.0 | 0.9113 | 0.8205 | 0.5153 | 0.7982 | 0.3948 | 0.6072 | 0.0 | 0.7020 | 0.6485 | 0.5029 | | 0.8671 | 1.08 | 400 | 0.6275 | 0.5254 | 0.6242 | 0.8089 | 0.9260 | 0.4626 | 0.7961 | 0.0 | 0.9307 | 0.7423 | 0.5118 | 0.8011 | 0.4163 | 0.6109 | 0.0 | 0.7127 | 0.6299 | 0.5066 | | 0.7227 | 1.14 | 420 | 0.6668 | 0.4999 | 0.6444 | 0.7820 | 0.9362 | 0.4977 | 0.8845 | 0.0 | 0.7314 | 0.9097 | 0.5511 | 0.7992 | 0.4354 | 0.5505 | 0.0 | 0.6543 | 0.5328 | 0.5270 | | 0.4667 | 1.19 | 440 | 0.6013 | 0.5509 | 0.6651 | 0.8174 | 0.9204 | 0.6158 | 0.8319 | 0.0 | 0.8903 | 0.8318 | 0.5657 | 0.8193 | 0.5118 | 0.6563 | 0.0 | 0.7104 | 0.6220 | 0.5369 | | 0.3519 | 1.24 | 460 | 0.6313 | 0.5253 | 0.6338 | 0.8098 | 0.9256 | 0.4552 | 0.8406 | 0.0 | 0.9058 | 0.7953 | 0.5140 | 0.8129 | 0.4100 | 0.6318 | 0.0 | 0.7163 | 0.6222 | 0.4837 | | 0.4907 | 1.3 | 480 | 0.6214 | 0.5544 | 0.6702 | 0.8179 | 0.9216 | 0.6888 | 0.8569 | 0.0 | 0.9036 | 0.8041 | 0.5162 | 0.8214 | 0.5469 | 0.6614 | 0.0 | 0.7144 | 0.6474 | 0.4895 | | 0.8349 | 1.35 | 500 | 0.5884 | 0.5525 | 0.6684 | 0.8200 | 0.9304 | 0.6519 | 0.7855 | 0.0 | 0.8966 | 0.8772 | 0.5371 | 0.8157 | 0.5241 | 0.6454 | 0.0 | 0.7289 | 0.6336 | 0.5200 | | 0.466 | 1.41 | 520 | 0.5810 | 0.5584 | 0.6869 | 0.8223 | 0.9170 | 0.7382 | 0.8488 | 0.0 | 0.8946 | 0.8842 | 0.5251 | 0.8226 | 0.5444 | 0.6739 | 0.0 | 0.7328 | 
0.6228 | 0.5122 | | 0.3783 | 1.46 | 540 | 0.5925 | 0.5628 | 0.6718 | 0.8224 | 0.9208 | 0.7262 | 0.8069 | 0.0 | 0.9286 | 0.7863 | 0.5338 | 0.8273 | 0.5618 | 0.6620 | 0.0 | 0.7098 | 0.6503 | 0.5284 | | 0.3865 | 1.51 | 560 | 0.5658 | 0.5667 | 0.6738 | 0.8245 | 0.9307 | 0.6871 | 0.8258 | 0.0 | 0.9056 | 0.8081 | 0.5596 | 0.8241 | 0.5652 | 0.6655 | 0.0 | 0.7164 | 0.6599 | 0.5357 | | 0.4578 | 1.57 | 580 | 0.6584 | 0.5512 | 0.6699 | 0.8128 | 0.8999 | 0.6886 | 0.8235 | 0.0 | 0.9096 | 0.8558 | 0.5117 | 0.8117 | 0.5384 | 0.6498 | 0.0 | 0.6981 | 0.6603 | 0.5003 | | 0.6526 | 1.62 | 600 | 0.5924 | 0.5548 | 0.6852 | 0.8195 | 0.9181 | 0.8340 | 0.8738 | 0.0 | 0.9148 | 0.7688 | 0.4867 | 0.8145 | 0.5239 | 0.6711 | 0.0 | 0.7293 | 0.6684 | 0.4765 | | 0.3538 | 1.68 | 620 | 0.5656 | 0.5771 | 0.7144 | 0.8238 | 0.9087 | 0.7954 | 0.8683 | 0.0 | 0.8171 | 0.9212 | 0.6899 | 0.8350 | 0.6169 | 0.6723 | 0.0 | 0.6950 | 0.5883 | 0.6322 | | 0.4446 | 1.73 | 640 | 0.5482 | 0.5792 | 0.6772 | 0.8344 | 0.9201 | 0.6804 | 0.8613 | 0.0 | 0.9421 | 0.7160 | 0.6204 | 0.8385 | 0.5755 | 0.6636 | 0.0 | 0.7278 | 0.6486 | 0.6006 | | 1.166 | 1.78 | 660 | 0.5782 | 0.5635 | 0.6885 | 0.8219 | 0.9064 | 0.7127 | 0.8046 | 0.0 | 0.8803 | 0.9036 | 0.6116 | 0.8288 | 0.5596 | 0.6522 | 0.0 | 0.7119 | 0.6053 | 0.5867 | | 0.884 | 1.84 | 680 | 0.5683 | 0.5736 | 0.6866 | 0.8288 | 0.9399 | 0.7292 | 0.8414 | 0.0 | 0.8930 | 0.8646 | 0.5378 | 0.8202 | 0.5786 | 0.6709 | 0.0 | 0.7343 | 0.6961 | 0.5153 | | 0.4581 | 1.89 | 700 | 0.5838 | 0.5806 | 0.7038 | 0.8293 | 0.8860 | 0.7218 | 0.8881 | 0.0004 | 0.8881 | 0.8782 | 0.6638 | 0.8220 | 0.5841 | 0.6273 | 0.0004 | 0.7226 | 0.6812 | 0.6264 | | 0.6757 | 1.95 | 720 | 0.5951 | 0.5691 | 0.6813 | 0.8251 | 0.8657 | 0.6365 | 0.8109 | 0.0003 | 0.9162 | 0.7627 | 0.7766 | 0.7896 | 0.5237 | 0.6350 | 0.0003 | 0.7493 | 0.6716 | 0.6142 | | 1.8545 | 2.0 | 740 | 0.6812 | 0.5417 | 0.6664 | 0.8110 | 0.9312 | 0.6934 | 0.8289 | 0.0001 | 0.8877 | 0.8927 | 0.4311 | 0.8070 | 0.5438 | 0.6569 | 0.0001 | 0.7201 | 0.6370 | 0.4271 | | 0.4047 | 2.05 | 760 | 0.6205 | 0.5462 | 0.6655 | 0.8142 | 0.9071 | 0.7238 | 0.8775 | 0.0003 | 0.9271 | 0.7154 | 0.5073 | 0.8065 | 0.5525 | 0.5943 | 0.0003 | 0.7338 | 0.6352 | 0.5008 | | 0.4515 | 2.11 | 780 | 0.6132 | 0.5336 | 0.6761 | 0.8040 | 0.9321 | 0.8235 | 0.7424 | 0.0024 | 0.8579 | 0.9264 | 0.4481 | 0.8100 | 0.5738 | 0.6352 | 0.0024 | 0.7222 | 0.5543 | 0.4370 | | 0.5024 | 2.16 | 800 | 0.5489 | 0.5850 | 0.6942 | 0.8366 | 0.9482 | 0.6629 | 0.8108 | 0.0243 | 0.8626 | 0.8734 | 0.6769 | 0.8162 | 0.5509 | 0.6876 | 0.0243 | 0.7615 | 0.6729 | 0.5819 | | 1.6373 | 2.22 | 820 | 0.5656 | 0.5826 | 0.6976 | 0.8340 | 0.9300 | 0.7296 | 0.8727 | 0.0192 | 0.8916 | 0.8391 | 0.6010 | 0.8099 | 0.5747 | 0.6903 | 0.0192 | 0.7676 | 0.6776 | 0.5386 | | 0.281 | 2.27 | 840 | 0.5156 | 0.6001 | 0.7189 | 0.8436 | 0.9102 | 0.7545 | 0.8683 | 0.0227 | 0.8791 | 0.8366 | 0.7610 | 0.8331 | 0.5925 | 0.6534 | 0.0226 | 0.7548 | 0.6809 | 0.6633 | | 0.579 | 2.32 | 860 | 0.5796 | 0.5517 | 0.6869 | 0.8157 | 0.9329 | 0.7382 | 0.8211 | 0.0321 | 0.8545 | 0.9163 | 0.5132 | 0.8326 | 0.5684 | 0.6283 | 0.0319 | 0.7276 | 0.5904 | 0.4827 | | 0.2339 | 2.38 | 880 | 0.5204 | 0.5813 | 0.7008 | 0.8328 | 0.9224 | 0.8078 | 0.8093 | 0.0844 | 0.9376 | 0.8072 | 0.5366 | 0.8388 | 0.5544 | 0.6906 | 0.0840 | 0.7456 | 0.6362 | 0.5193 | | 0.7328 | 2.43 | 900 | 0.5233 | 0.6044 | 0.7107 | 0.8414 | 0.9138 | 0.7215 | 0.8202 | 0.0809 | 0.9137 | 0.8555 | 0.6694 | 0.8436 | 0.5932 | 0.6987 | 0.0803 | 0.7384 | 0.6870 | 0.5898 | | 0.9982 | 2.49 | 920 | 0.5630 | 0.5893 | 0.7151 | 
0.8309 | 0.9000 | 0.7179 | 0.8320 | 0.0853 | 0.8590 | 0.8913 | 0.7203 | 0.8372 | 0.5638 | 0.6829 | 0.0850 | 0.7223 | 0.6385 | 0.5955 | | 0.3926 | 2.54 | 940 | 0.5589 | 0.5858 | 0.6842 | 0.8358 | 0.9225 | 0.6172 | 0.8171 | 0.0327 | 0.9131 | 0.8245 | 0.6622 | 0.8345 | 0.5369 | 0.6959 | 0.0325 | 0.7336 | 0.6944 | 0.5729 | | 0.5018 | 2.59 | 960 | 0.5172 | 0.6177 | 0.7248 | 0.8581 | 0.9236 | 0.6985 | 0.8513 | 0.0383 | 0.8975 | 0.8602 | 0.8042 | 0.8494 | 0.5810 | 0.6725 | 0.0380 | 0.7836 | 0.7077 | 0.6915 | | 0.2735 | 2.65 | 980 | 0.5515 | 0.5864 | 0.6963 | 0.8387 | 0.9344 | 0.7383 | 0.8665 | 0.0379 | 0.9330 | 0.8329 | 0.5309 | 0.8461 | 0.5993 | 0.7137 | 0.0375 | 0.7541 | 0.6296 | 0.5248 | | 0.3839 | 2.7 | 1000 | 0.5652 | 0.5803 | 0.7092 | 0.8326 | 0.9161 | 0.7864 | 0.8964 | 0.0354 | 0.8980 | 0.8822 | 0.5497 | 0.8380 | 0.6092 | 0.6727 | 0.0352 | 0.7549 | 0.6311 | 0.5212 | | 0.3531 | 2.76 | 1020 | 0.5669 | 0.5816 | 0.6981 | 0.8334 | 0.9352 | 0.7396 | 0.8690 | 0.0381 | 0.8979 | 0.8546 | 0.5521 | 0.8383 | 0.6134 | 0.6611 | 0.0379 | 0.7476 | 0.6373 | 0.5354 | | 0.2312 | 2.81 | 1040 | 0.5318 | 0.6043 | 0.7191 | 0.8460 | 0.9259 | 0.7778 | 0.8476 | 0.0471 | 0.9046 | 0.8812 | 0.6496 | 0.8397 | 0.5853 | 0.6739 | 0.0466 | 0.7573 | 0.7163 | 0.6112 | | 0.5955 | 2.86 | 1060 | 0.5730 | 0.5972 | 0.7101 | 0.8395 | 0.9149 | 0.8154 | 0.8389 | 0.0332 | 0.9251 | 0.8266 | 0.6164 | 0.8400 | 0.6176 | 0.6864 | 0.0330 | 0.7414 | 0.6908 | 0.5712 | | 0.4457 | 2.92 | 1080 | 0.4992 | 0.6089 | 0.7267 | 0.8455 | 0.9262 | 0.8220 | 0.8642 | 0.0595 | 0.8999 | 0.8946 | 0.6206 | 0.8421 | 0.6105 | 0.6996 | 0.0571 | 0.7504 | 0.7132 | 0.5892 | | 0.2825 | 2.97 | 1100 | 0.5145 | 0.6007 | 0.7105 | 0.8398 | 0.9360 | 0.7366 | 0.8729 | 0.0694 | 0.8921 | 0.8717 | 0.5949 | 0.8487 | 0.6017 | 0.7048 | 0.0661 | 0.7303 | 0.6948 | 0.5583 | | 0.3274 | 3.03 | 1120 | 0.5493 | 0.6054 | 0.7231 | 0.8404 | 0.9359 | 0.7684 | 0.8952 | 0.1101 | 0.8797 | 0.8864 | 0.5857 | 0.8487 | 0.6156 | 0.6978 | 0.1036 | 0.7379 | 0.6797 | 0.5544 | | 0.8034 | 3.08 | 1140 | 0.5433 | 0.6114 | 0.7238 | 0.8446 | 0.9268 | 0.7549 | 0.8444 | 0.0928 | 0.8886 | 0.8996 | 0.6598 | 0.8461 | 0.6118 | 0.7035 | 0.0899 | 0.7415 | 0.6758 | 0.6111 | | 0.5863 | 3.14 | 1160 | 0.5395 | 0.6096 | 0.7286 | 0.8430 | 0.9193 | 0.8539 | 0.8307 | 0.0950 | 0.9129 | 0.8827 | 0.6058 | 0.8445 | 0.6220 | 0.7115 | 0.0936 | 0.7445 | 0.6776 | 0.5734 | | 0.2249 | 3.19 | 1180 | 0.5066 | 0.6122 | 0.7183 | 0.8485 | 0.9421 | 0.7691 | 0.8629 | 0.0716 | 0.9124 | 0.8727 | 0.5972 | 0.8503 | 0.6324 | 0.7082 | 0.0700 | 0.7552 | 0.6834 | 0.5855 | | 0.2115 | 3.24 | 1200 | 0.5109 | 0.6203 | 0.7304 | 0.8464 | 0.9371 | 0.8116 | 0.8596 | 0.1329 | 0.9039 | 0.8805 | 0.5874 | 0.8511 | 0.6471 | 0.7280 | 0.1271 | 0.7420 | 0.6726 | 0.5740 | | 0.3939 | 3.3 | 1220 | 0.5001 | 0.6297 | 0.7418 | 0.8542 | 0.9117 | 0.7707 | 0.8710 | 0.1207 | 0.8892 | 0.8420 | 0.7873 | 0.8468 | 0.6324 | 0.7056 | 0.1168 | 0.7677 | 0.6723 | 0.6662 | | 0.3146 | 3.35 | 1240 | 0.4795 | 0.6310 | 0.7451 | 0.8530 | 0.9185 | 0.8271 | 0.8393 | 0.1369 | 0.8968 | 0.8735 | 0.7238 | 0.8391 | 0.6344 | 0.7031 | 0.1302 | 0.7666 | 0.6911 | 0.6527 | | 1.0368 | 3.41 | 1260 | 0.4918 | 0.6261 | 0.7292 | 0.8565 | 0.9118 | 0.6879 | 0.8536 | 0.1286 | 0.9225 | 0.8397 | 0.7607 | 0.8381 | 0.5896 | 0.6575 | 0.1200 | 0.7852 | 0.6613 | 0.7312 | | 0.5795 | 3.46 | 1280 | 0.6112 | 0.6323 | 0.7602 | 0.8406 | 0.8351 | 0.7382 | 0.9195 | 0.2488 | 0.8829 | 0.8442 | 0.8526 | 0.7706 | 0.6128 | 0.6311 | 0.2293 | 0.7889 | 0.6936 | 0.6997 | | 0.3686 | 3.51 | 1300 | 0.4967 | 0.6613 | 0.7782 | 0.8612 
| 0.8786 | 0.8180 | 0.8636 | 0.2660 | 0.9012 | 0.8664 | 0.8536 | 0.8298 | 0.6318 | 0.7420 | 0.2460 | 0.7798 | 0.6828 | 0.7166 | | 0.2349 | 3.57 | 1320 | 0.4953 | 0.6342 | 0.7357 | 0.8544 | 0.9391 | 0.7633 | 0.8615 | 0.1727 | 0.9151 | 0.8668 | 0.6312 | 0.8478 | 0.6300 | 0.7239 | 0.1561 | 0.7633 | 0.7080 | 0.6104 | | 0.4217 | 3.62 | 1340 | 0.4766 | 0.6516 | 0.7475 | 0.8632 | 0.9384 | 0.7338 | 0.8651 | 0.2318 | 0.9191 | 0.8396 | 0.7048 | 0.8575 | 0.6185 | 0.7234 | 0.2089 | 0.7709 | 0.7040 | 0.6780 | | 0.2898 | 3.68 | 1360 | 0.4637 | 0.6722 | 0.7814 | 0.8661 | 0.9278 | 0.8127 | 0.8470 | 0.3403 | 0.8946 | 0.9104 | 0.7372 | 0.8580 | 0.6420 | 0.7329 | 0.3011 | 0.7670 | 0.6921 | 0.7123 | | 0.5545 | 3.73 | 1380 | 0.4646 | 0.6755 | 0.7825 | 0.8681 | 0.9449 | 0.8086 | 0.8611 | 0.3592 | 0.8876 | 0.9017 | 0.7146 | 0.8599 | 0.6513 | 0.7467 | 0.3089 | 0.7765 | 0.7021 | 0.6829 | | 0.31 | 3.78 | 1400 | 0.4794 | 0.6748 | 0.7882 | 0.8650 | 0.9185 | 0.7609 | 0.8412 | 0.4449 | 0.8881 | 0.8753 | 0.7886 | 0.8540 | 0.6263 | 0.7287 | 0.3347 | 0.7736 | 0.7031 | 0.7030 | | 0.2665 | 3.84 | 1420 | 0.5223 | 0.6480 | 0.7634 | 0.8498 | 0.9196 | 0.7413 | 0.8567 | 0.3999 | 0.8926 | 0.8812 | 0.6523 | 0.8493 | 0.6271 | 0.7190 | 0.3217 | 0.7562 | 0.6703 | 0.5921 | | 0.5818 | 3.89 | 1440 | 0.5043 | 0.6629 | 0.7642 | 0.8661 | 0.9377 | 0.8163 | 0.8686 | 0.2610 | 0.9064 | 0.8081 | 0.7512 | 0.8516 | 0.6515 | 0.7271 | 0.2454 | 0.7841 | 0.6935 | 0.6869 | | 1.2037 | 3.95 | 1460 | 0.5396 | 0.6400 | 0.7395 | 0.8563 | 0.9248 | 0.6675 | 0.8409 | 0.2370 | 0.9066 | 0.8677 | 0.7322 | 0.8536 | 0.5725 | 0.7438 | 0.2230 | 0.7626 | 0.6881 | 0.6363 | | 0.317 | 4.0 | 1480 | 0.5300 | 0.6343 | 0.7373 | 0.8508 | 0.9009 | 0.6888 | 0.8873 | 0.2465 | 0.9246 | 0.7886 | 0.7244 | 0.8342 | 0.5893 | 0.7064 | 0.2198 | 0.7699 | 0.6932 | 0.6275 | | 0.2423 | 4.05 | 1500 | 0.4636 | 0.6769 | 0.7843 | 0.8695 | 0.9377 | 0.8186 | 0.8722 | 0.3433 | 0.8857 | 0.8647 | 0.7680 | 0.8525 | 0.6555 | 0.7449 | 0.2920 | 0.7846 | 0.6902 | 0.7185 | | 0.2467 | 4.11 | 1520 | 0.5415 | 0.6571 | 0.7648 | 0.8559 | 0.9488 | 0.7661 | 0.8346 | 0.4329 | 0.9052 | 0.8605 | 0.6053 | 0.8432 | 0.6481 | 0.7337 | 0.3121 | 0.7753 | 0.6935 | 0.5938 | | 0.4379 | 4.16 | 1540 | 0.4933 | 0.6634 | 0.7775 | 0.8623 | 0.9158 | 0.6951 | 0.8217 | 0.4249 | 0.8799 | 0.8754 | 0.8299 | 0.8526 | 0.5809 | 0.7307 | 0.3199 | 0.7743 | 0.6682 | 0.7170 | | 0.3536 | 4.22 | 1560 | 0.4949 | 0.6745 | 0.7931 | 0.8676 | 0.9322 | 0.8047 | 0.8617 | 0.3888 | 0.8572 | 0.8813 | 0.8260 | 0.8556 | 0.6337 | 0.7269 | 0.3072 | 0.7806 | 0.6859 | 0.7315 | | 0.6592 | 4.27 | 1580 | 0.4814 | 0.6780 | 0.7867 | 0.8706 | 0.9250 | 0.7844 | 0.8565 | 0.3672 | 0.8897 | 0.8674 | 0.8165 | 0.8577 | 0.6410 | 0.7296 | 0.3088 | 0.7861 | 0.6844 | 0.7382 | | 0.1168 | 4.32 | 1600 | 0.4972 | 0.6820 | 0.7779 | 0.8739 | 0.9281 | 0.7759 | 0.8545 | 0.3401 | 0.9138 | 0.8035 | 0.8295 | 0.8549 | 0.6408 | 0.7396 | 0.3046 | 0.7934 | 0.7084 | 0.7320 | | 0.5383 | 4.38 | 1620 | 0.5760 | 0.6567 | 0.7759 | 0.8517 | 0.9037 | 0.7389 | 0.8502 | 0.4652 | 0.8966 | 0.8831 | 0.6934 | 0.8432 | 0.6143 | 0.7389 | 0.3270 | 0.7522 | 0.6894 | 0.6318 | | 0.3261 | 4.43 | 1640 | 0.4663 | 0.6973 | 0.8008 | 0.8775 | 0.9392 | 0.7674 | 0.8570 | 0.4915 | 0.8951 | 0.8439 | 0.8115 | 0.8614 | 0.6479 | 0.7551 | 0.3742 | 0.7961 | 0.6990 | 0.7474 | | 0.2648 | 4.49 | 1660 | 0.4805 | 0.6741 | 0.7917 | 0.8662 | 0.9199 | 0.8188 | 0.8600 | 0.4445 | 0.9023 | 0.8431 | 0.7536 | 0.8604 | 0.6421 | 0.7428 | 0.3165 | 0.7821 | 0.6823 | 0.6924 | | 0.1427 | 4.54 | 1680 | 0.5141 | 0.6563 | 0.7721 | 0.8577 | 
0.9476 | 0.7653 | 0.8602 | 0.4278 | 0.8899 | 0.8945 | 0.6194 | 0.8543 | 0.6368 | 0.7375 | 0.3273 | 0.7831 | 0.6539 | 0.6008 | | 0.1841 | 4.59 | 1700 | 0.4819 | 0.6703 | 0.7890 | 0.8629 | 0.9453 | 0.7390 | 0.8606 | 0.5217 | 0.8662 | 0.8509 | 0.7390 | 0.8563 | 0.6235 | 0.7461 | 0.2843 | 0.7751 | 0.7058 | 0.7009 | | 0.2197 | 4.65 | 1720 | 0.4670 | 0.6841 | 0.7855 | 0.8723 | 0.9391 | 0.7990 | 0.8506 | 0.4233 | 0.9181 | 0.8333 | 0.7349 | 0.8618 | 0.6491 | 0.7432 | 0.3381 | 0.7882 | 0.7047 | 0.7035 | | 0.3657 | 4.7 | 1740 | 0.4965 | 0.6677 | 0.7736 | 0.8663 | 0.9329 | 0.7896 | 0.8883 | 0.3171 | 0.8923 | 0.8300 | 0.7653 | 0.8587 | 0.6569 | 0.7141 | 0.2648 | 0.7806 | 0.7081 | 0.6907 | | 0.2839 | 4.76 | 1760 | 0.5082 | 0.6600 | 0.7796 | 0.8537 | 0.9354 | 0.8088 | 0.8800 | 0.4487 | 0.8839 | 0.8953 | 0.6054 | 0.8483 | 0.6592 | 0.7410 | 0.3760 | 0.7684 | 0.6376 | 0.5897 | | 0.5087 | 4.81 | 1780 | 0.4551 | 0.6947 | 0.8116 | 0.8746 | 0.9369 | 0.7739 | 0.8744 | 0.5254 | 0.8626 | 0.8863 | 0.8220 | 0.8595 | 0.6513 | 0.7324 | 0.3658 | 0.7883 | 0.6789 | 0.7869 | | 0.3757 | 4.86 | 1800 | 0.4415 | 0.6921 | 0.7932 | 0.8753 | 0.9337 | 0.8234 | 0.8570 | 0.4325 | 0.9221 | 0.8316 | 0.7524 | 0.8551 | 0.6557 | 0.7466 | 0.3492 | 0.7967 | 0.7233 | 0.7182 | | 0.4985 | 4.92 | 1820 | 0.4335 | 0.6991 | 0.8034 | 0.8790 | 0.9233 | 0.8422 | 0.8633 | 0.4008 | 0.9103 | 0.8772 | 0.8069 | 0.8529 | 0.6560 | 0.7576 | 0.3556 | 0.8018 | 0.7081 | 0.7615 | | 0.3946 | 4.97 | 1840 | 0.4248 | 0.7033 | 0.8104 | 0.8801 | 0.9262 | 0.8356 | 0.8432 | 0.4626 | 0.9059 | 0.8822 | 0.8168 | 0.8575 | 0.6567 | 0.7521 | 0.3797 | 0.8042 | 0.7187 | 0.7541 | | 0.1543 | 5.03 | 1860 | 0.4983 | 0.6654 | 0.7719 | 0.8651 | 0.9455 | 0.7695 | 0.8847 | 0.3822 | 0.9096 | 0.8585 | 0.6536 | 0.8525 | 0.6534 | 0.7317 | 0.2814 | 0.7991 | 0.7079 | 0.6315 | | 0.3259 | 5.08 | 1880 | 0.4685 | 0.6825 | 0.7957 | 0.8669 | 0.9465 | 0.7994 | 0.8415 | 0.5036 | 0.8811 | 0.8888 | 0.7091 | 0.8534 | 0.6490 | 0.7638 | 0.3841 | 0.7871 | 0.6831 | 0.6568 | | 0.2935 | 5.14 | 1900 | 0.4701 | 0.6867 | 0.7948 | 0.8708 | 0.9327 | 0.8312 | 0.8561 | 0.4515 | 0.9053 | 0.8376 | 0.7495 | 0.8512 | 0.6527 | 0.7666 | 0.3557 | 0.7929 | 0.6987 | 0.6889 | | 0.2034 | 5.19 | 1920 | 0.4550 | 0.7051 | 0.8169 | 0.8801 | 0.9276 | 0.8185 | 0.8720 | 0.5048 | 0.8895 | 0.8739 | 0.8320 | 0.8621 | 0.6570 | 0.7717 | 0.3746 | 0.7978 | 0.7094 | 0.7630 | | 0.4657 | 5.24 | 1940 | 0.5338 | 0.7000 | 0.8168 | 0.8738 | 0.9080 | 0.8181 | 0.8832 | 0.4949 | 0.8719 | 0.8608 | 0.8806 | 0.8537 | 0.6541 | 0.7400 | 0.4127 | 0.7836 | 0.7099 | 0.7462 | | 0.1772 | 5.3 | 1960 | 0.4634 | 0.6831 | 0.7860 | 0.8715 | 0.9208 | 0.7300 | 0.8915 | 0.4096 | 0.9065 | 0.8675 | 0.7758 | 0.8557 | 0.6223 | 0.7291 | 0.3492 | 0.7842 | 0.7078 | 0.7337 | | 0.2717 | 5.35 | 1980 | 0.4557 | 0.6976 | 0.7962 | 0.8799 | 0.9285 | 0.8214 | 0.8495 | 0.4140 | 0.9293 | 0.8366 | 0.7942 | 0.8591 | 0.6529 | 0.7577 | 0.3605 | 0.8069 | 0.6974 | 0.7490 | | 0.2144 | 5.41 | 2000 | 0.4535 | 0.6935 | 0.7926 | 0.8789 | 0.9360 | 0.8037 | 0.8675 | 0.3906 | 0.9112 | 0.8323 | 0.8070 | 0.8659 | 0.6591 | 0.7570 | 0.3530 | 0.8043 | 0.6809 | 0.7344 | | 0.5953 | 5.46 | 2020 | 0.4030 | 0.6948 | 0.7992 | 0.8787 | 0.9225 | 0.7925 | 0.8888 | 0.4236 | 0.9128 | 0.8392 | 0.8153 | 0.8673 | 0.6526 | 0.7638 | 0.3220 | 0.7958 | 0.7158 | 0.7464 | | 0.1702 | 5.51 | 2040 | 0.4188 | 0.6918 | 0.8004 | 0.8737 | 0.9093 | 0.8090 | 0.8767 | 0.4469 | 0.9199 | 0.8595 | 0.7816 | 0.8508 | 0.6627 | 0.7596 | 0.3438 | 0.8016 | 0.7211 | 0.7027 | | 0.2988 | 5.57 | 2060 | 0.5301 | 0.6836 | 0.7995 | 0.8633 | 
0.8811 | 0.7637 | 0.8819 | 0.4693 | 0.8899 | 0.8596 | 0.8512 | 0.8094 | 0.6496 | 0.7485 | 0.3617 | 0.8092 | 0.7299 | 0.6772 | | 0.2987 | 5.62 | 2080 | 0.5280 | 0.6882 | 0.8027 | 0.8667 | 0.8863 | 0.7609 | 0.8462 | 0.5201 | 0.9077 | 0.8618 | 0.8360 | 0.8202 | 0.6382 | 0.7361 | 0.3991 | 0.8130 | 0.7273 | 0.6835 | | 0.1692 | 5.68 | 2100 | 0.5307 | 0.6851 | 0.7984 | 0.8670 | 0.8890 | 0.8436 | 0.8368 | 0.4040 | 0.9087 | 0.8793 | 0.8274 | 0.8159 | 0.6591 | 0.7381 | 0.3551 | 0.8176 | 0.7318 | 0.6780 | | 0.1263 | 5.73 | 2120 | 0.5442 | 0.6841 | 0.8038 | 0.8647 | 0.8565 | 0.8307 | 0.8781 | 0.4069 | 0.9090 | 0.8600 | 0.8851 | 0.8045 | 0.6518 | 0.7452 | 0.3355 | 0.8125 | 0.7468 | 0.6922 | | 0.5408 | 5.78 | 2140 | 0.4202 | 0.7017 | 0.8039 | 0.8818 | 0.9431 | 0.8012 | 0.8548 | 0.4229 | 0.8897 | 0.8906 | 0.8247 | 0.8623 | 0.6651 | 0.7599 | 0.3469 | 0.8051 | 0.6877 | 0.7845 | | 0.1841 | 5.84 | 2160 | 0.4853 | 0.6824 | 0.7780 | 0.8735 | 0.9525 | 0.7711 | 0.8728 | 0.3869 | 0.9176 | 0.8479 | 0.6972 | 0.8512 | 0.6538 | 0.7573 | 0.3085 | 0.8079 | 0.7262 | 0.6717 | | 0.1868 | 5.89 | 2180 | 0.5027 | 0.6762 | 0.7934 | 0.8686 | 0.9317 | 0.8252 | 0.8990 | 0.4336 | 0.9004 | 0.8540 | 0.7094 | 0.8648 | 0.6659 | 0.7207 | 0.3175 | 0.7932 | 0.6882 | 0.6827 | | 0.3223 | 5.95 | 2200 | 0.4496 | 0.7057 | 0.8101 | 0.8814 | 0.9262 | 0.7889 | 0.8571 | 0.4803 | 0.9010 | 0.8744 | 0.8427 | 0.8649 | 0.6600 | 0.7623 | 0.3747 | 0.7966 | 0.6841 | 0.7972 | | 0.1825 | 6.0 | 2220 | 0.4498 | 0.6969 | 0.7993 | 0.8779 | 0.9383 | 0.8241 | 0.8505 | 0.4503 | 0.9135 | 0.8471 | 0.7711 | 0.8625 | 0.6702 | 0.7520 | 0.3455 | 0.7973 | 0.7126 | 0.7380 | | 0.2738 | 6.05 | 2240 | 0.4840 | 0.6939 | 0.7944 | 0.8776 | 0.9366 | 0.8067 | 0.8489 | 0.4348 | 0.9244 | 0.8633 | 0.7459 | 0.8695 | 0.6675 | 0.7544 | 0.3469 | 0.7966 | 0.7043 | 0.7182 | | 0.2593 | 6.11 | 2260 | 0.5049 | 0.6699 | 0.7827 | 0.8660 | 0.9539 | 0.8025 | 0.8898 | 0.4326 | 0.8854 | 0.8144 | 0.7004 | 0.8615 | 0.6719 | 0.7159 | 0.2738 | 0.7929 | 0.6975 | 0.6757 | | 0.2305 | 6.16 | 2280 | 0.4865 | 0.6783 | 0.7893 | 0.8698 | 0.9378 | 0.8271 | 0.8663 | 0.4137 | 0.9061 | 0.8663 | 0.7076 | 0.8691 | 0.6749 | 0.7235 | 0.3301 | 0.7894 | 0.6759 | 0.6856 | | 0.3543 | 6.22 | 2300 | 0.5069 | 0.6821 | 0.7860 | 0.8720 | 0.9367 | 0.8107 | 0.8649 | 0.4387 | 0.9259 | 0.7976 | 0.7276 | 0.8603 | 0.6628 | 0.7296 | 0.3114 | 0.7960 | 0.7074 | 0.7072 | | 0.2266 | 6.27 | 2320 | 0.4808 | 0.6838 | 0.7911 | 0.8722 | 0.9515 | 0.7631 | 0.8498 | 0.4984 | 0.8985 | 0.8467 | 0.7296 | 0.8694 | 0.6478 | 0.7449 | 0.3663 | 0.7935 | 0.6558 | 0.7090 | | 0.2134 | 6.32 | 2340 | 0.4706 | 0.6736 | 0.7848 | 0.8685 | 0.9396 | 0.7247 | 0.9010 | 0.4622 | 0.8905 | 0.8326 | 0.7428 | 0.8673 | 0.6160 | 0.7185 | 0.3517 | 0.7866 | 0.6614 | 0.7134 | | 0.2823 | 6.38 | 2360 | 0.4619 | 0.6921 | 0.7957 | 0.8773 | 0.9422 | 0.8285 | 0.8589 | 0.4346 | 0.9187 | 0.8503 | 0.7367 | 0.8714 | 0.6754 | 0.7431 | 0.3636 | 0.8037 | 0.6763 | 0.7115 | | 0.2696 | 6.43 | 2380 | 0.4805 | 0.6892 | 0.8042 | 0.8720 | 0.9538 | 0.8098 | 0.8575 | 0.5327 | 0.8805 | 0.8762 | 0.7191 | 0.8677 | 0.6756 | 0.7480 | 0.3915 | 0.7976 | 0.6524 | 0.6915 | | 0.3536 | 6.49 | 2400 | 0.4269 | 0.7014 | 0.8034 | 0.8808 | 0.9368 | 0.8231 | 0.8803 | 0.4420 | 0.9091 | 0.8352 | 0.7972 | 0.8708 | 0.6696 | 0.7408 | 0.3771 | 0.7981 | 0.6910 | 0.7625 | | 0.2242 | 6.54 | 2420 | 0.4132 | 0.6971 | 0.8080 | 0.8786 | 0.9399 | 0.8311 | 0.8764 | 0.4806 | 0.8960 | 0.8431 | 0.7889 | 0.8710 | 0.6565 | 0.7415 | 0.3596 | 0.7952 | 0.6992 | 0.7563 | | 0.2066 | 6.59 | 2440 | 0.4250 | 0.6984 | 0.7891 | 0.8821 | 
0.9443 | 0.7366 | 0.8550 | 0.4531 | 0.9274 | 0.8033 | 0.8036 | 0.8595 | 0.6264 | 0.7442 | 0.3773 | 0.8165 | 0.7104 | 0.7542 | | 0.3392 | 6.65 | 2460 | 0.4451 | 0.7069 | 0.8068 | 0.8826 | 0.9344 | 0.7984 | 0.8709 | 0.4868 | 0.9166 | 0.8370 | 0.8034 | 0.8621 | 0.6675 | 0.7526 | 0.3907 | 0.8134 | 0.7178 | 0.7439 | | 0.157 | 6.7 | 2480 | 0.4524 | 0.6997 | 0.8127 | 0.8789 | 0.9376 | 0.8584 | 0.8853 | 0.4565 | 0.8887 | 0.8806 | 0.7816 | 0.8628 | 0.6606 | 0.7511 | 0.3643 | 0.7975 | 0.7006 | 0.7612 | | 0.372 | 6.76 | 2500 | 0.4940 | 0.6909 | 0.7882 | 0.8735 | 0.9448 | 0.7823 | 0.8675 | 0.4468 | 0.9153 | 0.8503 | 0.7101 | 0.8591 | 0.6557 | 0.7605 | 0.3728 | 0.7878 | 0.7093 | 0.6912 | | 0.3178 | 6.81 | 2520 | 0.4903 | 0.6930 | 0.8142 | 0.8717 | 0.9423 | 0.8331 | 0.8682 | 0.5512 | 0.8773 | 0.9142 | 0.7134 | 0.8683 | 0.6637 | 0.7577 | 0.4163 | 0.7872 | 0.6649 | 0.6930 | | 0.215 | 6.86 | 2540 | 0.4637 | 0.6972 | 0.7998 | 0.8757 | 0.9532 | 0.8119 | 0.8633 | 0.4540 | 0.8857 | 0.8881 | 0.7424 | 0.8646 | 0.6725 | 0.7560 | 0.3920 | 0.7871 | 0.6898 | 0.7182 | | 0.3213 | 6.92 | 2560 | 0.4628 | 0.6959 | 0.8036 | 0.8743 | 0.9387 | 0.8257 | 0.8729 | 0.4564 | 0.8865 | 0.8796 | 0.7655 | 0.8649 | 0.6665 | 0.7610 | 0.3949 | 0.7854 | 0.6949 | 0.7040 | | 0.2527 | 6.97 | 2580 | 0.4482 | 0.7161 | 0.8130 | 0.8883 | 0.9407 | 0.7845 | 0.8738 | 0.4884 | 0.9042 | 0.8376 | 0.8616 | 0.8654 | 0.6612 | 0.7485 | 0.3966 | 0.8198 | 0.7400 | 0.7813 | | 0.1728 | 7.03 | 2600 | 0.5057 | 0.6812 | 0.7963 | 0.8684 | 0.9445 | 0.8226 | 0.8590 | 0.4278 | 0.8697 | 0.9191 | 0.7313 | 0.8649 | 0.6499 | 0.7537 | 0.3764 | 0.7808 | 0.6370 | 0.7060 | | 0.2091 | 7.08 | 2620 | 0.4663 | 0.6976 | 0.7985 | 0.8776 | 0.9460 | 0.8091 | 0.8621 | 0.4714 | 0.9128 | 0.8510 | 0.7370 | 0.8651 | 0.6651 | 0.7630 | 0.3829 | 0.7992 | 0.7020 | 0.7062 | | 0.2555 | 7.14 | 2640 | 0.4451 | 0.6965 | 0.8166 | 0.8746 | 0.9317 | 0.8049 | 0.8471 | 0.5562 | 0.8739 | 0.8891 | 0.8131 | 0.8724 | 0.6473 | 0.7589 | 0.3731 | 0.7762 | 0.6921 | 0.7555 | | 0.0762 | 7.19 | 2660 | 0.5804 | 0.6913 | 0.8124 | 0.8693 | 0.8979 | 0.8372 | 0.8690 | 0.4670 | 0.8733 | 0.8620 | 0.8803 | 0.8319 | 0.6566 | 0.7501 | 0.3740 | 0.7960 | 0.6973 | 0.7335 | | 0.2356 | 7.24 | 2680 | 0.5501 | 0.6870 | 0.8061 | 0.8653 | 0.9067 | 0.8089 | 0.8826 | 0.5069 | 0.8836 | 0.8740 | 0.7798 | 0.8436 | 0.6681 | 0.7500 | 0.3888 | 0.7794 | 0.6888 | 0.6903 | | 0.229 | 7.3 | 2700 | 0.4246 | 0.7151 | 0.8166 | 0.8857 | 0.9342 | 0.8390 | 0.8546 | 0.4949 | 0.9131 | 0.8574 | 0.8230 | 0.8709 | 0.6682 | 0.7535 | 0.4107 | 0.8030 | 0.7215 | 0.7781 | | 0.287 | 7.35 | 2720 | 0.4747 | 0.6925 | 0.7956 | 0.8747 | 0.9487 | 0.7590 | 0.8601 | 0.5028 | 0.9007 | 0.8641 | 0.7335 | 0.8688 | 0.6575 | 0.7474 | 0.3801 | 0.7887 | 0.6922 | 0.7126 | | 0.1907 | 7.41 | 2740 | 0.4474 | 0.7047 | 0.8106 | 0.8794 | 0.9501 | 0.8078 | 0.8484 | 0.5106 | 0.8832 | 0.8878 | 0.7866 | 0.8644 | 0.6592 | 0.7515 | 0.4136 | 0.7961 | 0.6971 | 0.7511 | | 0.1433 | 7.46 | 2760 | 0.4735 | 0.6826 | 0.7862 | 0.8733 | 0.9403 | 0.7459 | 0.8884 | 0.4500 | 0.9126 | 0.8334 | 0.7329 | 0.8710 | 0.6268 | 0.7086 | 0.3855 | 0.7979 | 0.6869 | 0.7014 | | 0.2798 | 7.51 | 2780 | 0.4645 | 0.6950 | 0.8059 | 0.8744 | 0.9293 | 0.7871 | 0.8788 | 0.5225 | 0.9041 | 0.8737 | 0.7459 | 0.8671 | 0.6606 | 0.7271 | 0.4156 | 0.7951 | 0.6873 | 0.7119 | | 0.2179 | 7.57 | 2800 | 0.4856 | 0.6790 | 0.8021 | 0.8667 | 0.9389 | 0.7727 | 0.8977 | 0.5403 | 0.8740 | 0.8819 | 0.7091 | 0.8642 | 0.6515 | 0.7139 | 0.3837 | 0.7894 | 0.6674 | 0.6829 | | 0.2489 | 7.62 | 2820 | 0.4813 | 0.6997 | 0.7979 | 0.8781 | 0.9422 | 
0.8197 | 0.8659 | 0.4907 | 0.9300 | 0.8034 | 0.7334 | 0.8636 | 0.6674 | 0.7461 | 0.4032 | 0.8033 | 0.7118 | 0.7026 | | 0.2088 | 7.68 | 2840 | 0.4967 | 0.6967 | 0.7868 | 0.8772 | 0.9448 | 0.7841 | 0.8710 | 0.4438 | 0.9314 | 0.7981 | 0.7347 | 0.8633 | 0.6679 | 0.7616 | 0.3802 | 0.7950 | 0.7054 | 0.7036 | | 1.4943 | 7.73 | 2860 | 0.4650 | 0.7076 | 0.8172 | 0.8822 | 0.9453 | 0.8504 | 0.8745 | 0.5019 | 0.8875 | 0.8525 | 0.8084 | 0.8626 | 0.6716 | 0.7564 | 0.3858 | 0.8073 | 0.6949 | 0.7744 | | 0.1842 | 7.78 | 2880 | 0.4890 | 0.6962 | 0.8026 | 0.8771 | 0.9433 | 0.8371 | 0.8870 | 0.4157 | 0.8845 | 0.8691 | 0.7817 | 0.8619 | 0.6745 | 0.7458 | 0.3593 | 0.7933 | 0.6917 | 0.7467 | | 0.1736 | 7.84 | 2900 | 0.5137 | 0.6846 | 0.7997 | 0.8686 | 0.9294 | 0.7906 | 0.8846 | 0.5005 | 0.8908 | 0.8688 | 0.7333 | 0.8614 | 0.6727 | 0.7363 | 0.3548 | 0.7818 | 0.6848 | 0.7007 | | 0.0741 | 7.89 | 2920 | 0.4894 | 0.6865 | 0.8048 | 0.8674 | 0.9449 | 0.7941 | 0.8550 | 0.5675 | 0.8726 | 0.8752 | 0.7243 | 0.8600 | 0.6575 | 0.7549 | 0.3885 | 0.7768 | 0.6912 | 0.6768 | | 1.0211 | 7.95 | 2940 | 0.5253 | 0.6995 | 0.8028 | 0.8746 | 0.9249 | 0.8094 | 0.8542 | 0.5138 | 0.9180 | 0.8269 | 0.7723 | 0.8553 | 0.6635 | 0.7638 | 0.3970 | 0.7927 | 0.7299 | 0.6947 | | 0.1533 | 8.0 | 2960 | 0.5440 | 0.6866 | 0.7910 | 0.8704 | 0.9378 | 0.8262 | 0.8675 | 0.4365 | 0.9073 | 0.8407 | 0.7210 | 0.8520 | 0.6734 | 0.7407 | 0.3410 | 0.7870 | 0.7222 | 0.6895 | | 0.1831 | 8.05 | 2980 | 0.5731 | 0.6853 | 0.7804 | 0.8715 | 0.9367 | 0.8117 | 0.8719 | 0.3778 | 0.9262 | 0.8276 | 0.7109 | 0.8510 | 0.6730 | 0.7480 | 0.3358 | 0.7907 | 0.7154 | 0.6831 | | 0.3096 | 8.11 | 3000 | 0.5337 | 0.6960 | 0.8019 | 0.8745 | 0.9299 | 0.8336 | 0.8758 | 0.4576 | 0.9075 | 0.8535 | 0.7552 | 0.8623 | 0.6744 | 0.7461 | 0.3805 | 0.7856 | 0.7046 | 0.7188 | | 0.3453 | 8.16 | 3020 | 0.5051 | 0.6937 | 0.8006 | 0.8764 | 0.9358 | 0.8207 | 0.8646 | 0.4590 | 0.9096 | 0.8591 | 0.7555 | 0.8666 | 0.6735 | 0.7450 | 0.3479 | 0.7970 | 0.7071 | 0.7190 | | 0.428 | 8.22 | 3040 | 0.5491 | 0.6820 | 0.7859 | 0.8701 | 0.9395 | 0.7986 | 0.8750 | 0.4329 | 0.9168 | 0.8531 | 0.6857 | 0.8624 | 0.6737 | 0.7670 | 0.3504 | 0.7928 | 0.6666 | 0.6612 | | 0.1888 | 8.27 | 3060 | 0.4946 | 0.6962 | 0.8192 | 0.8740 | 0.9356 | 0.8465 | 0.8509 | 0.5832 | 0.8872 | 0.8772 | 0.7539 | 0.8663 | 0.6706 | 0.7675 | 0.3718 | 0.7924 | 0.6868 | 0.7177 | | 0.2311 | 8.32 | 3080 | 0.5046 | 0.6989 | 0.7946 | 0.8786 | 0.9458 | 0.8213 | 0.8475 | 0.4273 | 0.9177 | 0.8529 | 0.7496 | 0.8630 | 0.6715 | 0.7560 | 0.3647 | 0.7967 | 0.7241 | 0.7164 | | 0.1286 | 8.38 | 3100 | 0.4906 | 0.7030 | 0.8158 | 0.8783 | 0.9350 | 0.8164 | 0.8721 | 0.5063 | 0.8831 | 0.9105 | 0.7871 | 0.8723 | 0.6683 | 0.7490 | 0.3971 | 0.7875 | 0.6949 | 0.7517 | | 0.1767 | 8.43 | 3120 | 0.4789 | 0.7067 | 0.8089 | 0.8809 | 0.9405 | 0.8015 | 0.8749 | 0.4939 | 0.9045 | 0.8813 | 0.7658 | 0.8747 | 0.6714 | 0.7694 | 0.4011 | 0.7926 | 0.7091 | 0.7283 | | 0.5431 | 8.49 | 3140 | 0.4786 | 0.7144 | 0.8228 | 0.8871 | 0.9264 | 0.8328 | 0.8910 | 0.4969 | 0.9077 | 0.8680 | 0.8366 | 0.8681 | 0.6637 | 0.7535 | 0.3921 | 0.8163 | 0.7153 | 0.7920 | | 0.2099 | 8.54 | 3160 | 0.5180 | 0.6947 | 0.8019 | 0.8811 | 0.9422 | 0.7051 | 0.8342 | 0.5563 | 0.9031 | 0.8332 | 0.8388 | 0.8704 | 0.5818 | 0.7568 | 0.3983 | 0.8201 | 0.7068 | 0.7289 | | 0.1189 | 8.59 | 3180 | 0.5339 | 0.6887 | 0.7862 | 0.8776 | 0.9371 | 0.6831 | 0.8428 | 0.5067 | 0.9287 | 0.8210 | 0.7839 | 0.8641 | 0.5771 | 0.7609 | 0.4034 | 0.8181 | 0.7125 | 0.6848 | | 0.1711 | 8.65 | 3200 | 0.4568 | 0.7076 | 0.8186 | 0.8815 | 0.9402 | 
0.8317 | 0.8801 | 0.5157 | 0.8889 | 0.8824 | 0.7913 | 0.8634 | 0.6597 | 0.7560 | 0.4045 | 0.8055 | 0.7136 | 0.7504 | | 0.4557 | 8.7 | 3220 | 0.4799 | 0.7026 | 0.8016 | 0.8804 | 0.9429 | 0.7953 | 0.8625 | 0.5141 | 0.9288 | 0.8274 | 0.7403 | 0.8661 | 0.6632 | 0.7647 | 0.3778 | 0.8067 | 0.7320 | 0.7080 | | 0.0809 | 8.76 | 3240 | 0.4545 | 0.7083 | 0.8234 | 0.8828 | 0.9278 | 0.8408 | 0.8498 | 0.5388 | 0.8994 | 0.8765 | 0.8307 | 0.8632 | 0.6602 | 0.7456 | 0.3742 | 0.8094 | 0.7143 | 0.7912 | | 0.3028 | 8.81 | 3260 | 0.4539 | 0.7080 | 0.8173 | 0.8816 | 0.9341 | 0.8255 | 0.8772 | 0.4574 | 0.8749 | 0.9142 | 0.8375 | 0.8643 | 0.6664 | 0.7585 | 0.3924 | 0.7956 | 0.6847 | 0.7942 | | 0.1655 | 8.86 | 3280 | 0.4907 | 0.6963 | 0.8062 | 0.8735 | 0.9288 | 0.8218 | 0.8937 | 0.4723 | 0.8922 | 0.8776 | 0.7567 | 0.8547 | 0.6615 | 0.7491 | 0.3940 | 0.7861 | 0.7087 | 0.7202 | | 0.1474 | 8.92 | 3300 | 0.4925 | 0.6946 | 0.7979 | 0.8742 | 0.9385 | 0.7808 | 0.8937 | 0.4789 | 0.8996 | 0.8447 | 0.7491 | 0.8613 | 0.6665 | 0.7515 | 0.3747 | 0.7863 | 0.7087 | 0.7131 | | 0.2676 | 8.97 | 3320 | 0.4975 | 0.6919 | 0.7928 | 0.8743 | 0.9380 | 0.7615 | 0.8646 | 0.4503 | 0.9011 | 0.8725 | 0.7616 | 0.8644 | 0.6462 | 0.7436 | 0.3796 | 0.7843 | 0.7028 | 0.7222 | | 0.2192 | 9.03 | 3340 | 0.4962 | 0.6924 | 0.8058 | 0.8731 | 0.9363 | 0.8380 | 0.8852 | 0.4848 | 0.8976 | 0.8827 | 0.7157 | 0.8631 | 0.6732 | 0.7456 | 0.3940 | 0.7961 | 0.6920 | 0.6828 | | 0.129 | 9.08 | 3360 | 0.5087 | 0.6918 | 0.8104 | 0.8718 | 0.9374 | 0.8291 | 0.8529 | 0.5362 | 0.8866 | 0.9010 | 0.7299 | 0.8701 | 0.6697 | 0.7518 | 0.4011 | 0.7874 | 0.6680 | 0.6942 | | 0.3131 | 9.14 | 3380 | 0.5127 | 0.6957 | 0.8021 | 0.8755 | 0.9415 | 0.8215 | 0.8676 | 0.4862 | 0.9016 | 0.8421 | 0.7542 | 0.8712 | 0.6718 | 0.7484 | 0.3893 | 0.7890 | 0.6962 | 0.7042 | | 0.1653 | 9.19 | 3400 | 0.5100 | 0.6953 | 0.8066 | 0.8742 | 0.9307 | 0.8402 | 0.8666 | 0.4884 | 0.9027 | 0.8658 | 0.7519 | 0.8632 | 0.6682 | 0.7573 | 0.3721 | 0.7861 | 0.7025 | 0.7180 | | 0.1177 | 9.24 | 3420 | 0.5000 | 0.6961 | 0.8144 | 0.8721 | 0.9385 | 0.8095 | 0.8727 | 0.5329 | 0.8629 | 0.9219 | 0.7625 | 0.8683 | 0.6730 | 0.7652 | 0.4048 | 0.7772 | 0.6588 | 0.7252 | | 0.2833 | 9.3 | 3440 | 0.5003 | 0.6958 | 0.7989 | 0.8763 | 0.9486 | 0.7977 | 0.8871 | 0.4846 | 0.9005 | 0.8292 | 0.7448 | 0.8662 | 0.6713 | 0.7553 | 0.3793 | 0.7951 | 0.6911 | 0.7122 | | 0.1196 | 9.35 | 3460 | 0.4964 | 0.7008 | 0.8040 | 0.8783 | 0.9370 | 0.8310 | 0.8412 | 0.4814 | 0.9168 | 0.8609 | 0.7596 | 0.8665 | 0.6737 | 0.7514 | 0.3896 | 0.7974 | 0.7002 | 0.7270 | | 0.1224 | 9.41 | 3480 | 0.4281 | 0.7204 | 0.8176 | 0.8908 | 0.9484 | 0.7883 | 0.8413 | 0.5106 | 0.9017 | 0.8722 | 0.8605 | 0.8751 | 0.6751 | 0.7600 | 0.3813 | 0.8148 | 0.7261 | 0.8109 | | 0.1925 | 9.46 | 3500 | 0.4850 | 0.7002 | 0.8030 | 0.8784 | 0.9448 | 0.7955 | 0.8373 | 0.4988 | 0.9062 | 0.8798 | 0.7583 | 0.8721 | 0.6740 | 0.7508 | 0.3798 | 0.7936 | 0.7082 | 0.7229 | | 0.142 | 9.51 | 3520 | 0.5039 | 0.6978 | 0.8018 | 0.8774 | 0.9440 | 0.7854 | 0.8683 | 0.4771 | 0.8981 | 0.8884 | 0.7512 | 0.8694 | 0.6588 | 0.7567 | 0.3832 | 0.7916 | 0.7074 | 0.7178 | | 0.5374 | 9.57 | 3540 | 0.5199 | 0.6821 | 0.7807 | 0.8729 | 0.9427 | 0.8054 | 0.8801 | 0.3721 | 0.9214 | 0.8339 | 0.7094 | 0.8598 | 0.6709 | 0.7452 | 0.3113 | 0.7986 | 0.7131 | 0.6757 | | 0.2482 | 9.62 | 3560 | 0.4652 | 0.7054 | 0.8188 | 0.8799 | 0.9334 | 0.8055 | 0.8482 | 0.5406 | 0.8845 | 0.8975 | 0.8222 | 0.8757 | 0.6677 | 0.7649 | 0.3855 | 0.7887 | 0.6870 | 0.7683 | | 0.3794 | 9.68 | 3580 | 0.5039 | 0.7044 | 0.8126 | 0.8771 | 0.9348 | 
0.8018 | 0.8510 | 0.5782 | 0.9002 | 0.8226 | 0.7999 | 0.8747 | 0.6650 | 0.7700 | 0.4059 | 0.7765 | 0.6935 | 0.7449 | | 0.2213 | 9.73 | 3600 | 0.5238 | 0.7036 | 0.8106 | 0.8774 | 0.9399 | 0.8302 | 0.8701 | 0.5202 | 0.9010 | 0.8593 | 0.7531 | 0.8737 | 0.6708 | 0.7721 | 0.4098 | 0.7819 | 0.6973 | 0.7199 | | 0.1819 | 9.78 | 3620 | 0.5291 | 0.6917 | 0.8056 | 0.8703 | 0.9396 | 0.8039 | 0.8937 | 0.5011 | 0.8715 | 0.8905 | 0.7392 | 0.8687 | 0.6748 | 0.7572 | 0.3994 | 0.7727 | 0.6679 | 0.7012 | | 0.2002 | 9.84 | 3640 | 0.5531 | 0.6887 | 0.7981 | 0.8701 | 0.9399 | 0.8237 | 0.8655 | 0.4788 | 0.8985 | 0.8792 | 0.7009 | 0.8656 | 0.6756 | 0.7512 | 0.4131 | 0.7882 | 0.6575 | 0.6698 | | 0.2655 | 9.89 | 3660 | 0.5161 | 0.6868 | 0.7972 | 0.8697 | 0.9528 | 0.8048 | 0.8700 | 0.4986 | 0.8833 | 0.8687 | 0.7019 | 0.8657 | 0.6752 | 0.7501 | 0.4023 | 0.7894 | 0.6584 | 0.6664 | | 0.2477 | 9.95 | 3680 | 0.4836 | 0.7019 | 0.8157 | 0.8778 | 0.9377 | 0.8214 | 0.8407 | 0.5560 | 0.8883 | 0.8538 | 0.8120 | 0.8664 | 0.6647 | 0.7536 | 0.3846 | 0.7933 | 0.6922 | 0.7582 | | 0.2156 | 10.0 | 3700 | 0.5079 | 0.7042 | 0.8164 | 0.8776 | 0.9386 | 0.8435 | 0.8767 | 0.4869 | 0.8729 | 0.9000 | 0.7964 | 0.8623 | 0.6719 | 0.7603 | 0.4014 | 0.7876 | 0.6827 | 0.7632 | | 0.0994 | 10.05 | 3720 | 0.5784 | 0.6926 | 0.7874 | 0.8742 | 0.9424 | 0.7519 | 0.8636 | 0.4772 | 0.9168 | 0.8107 | 0.7495 | 0.8682 | 0.6584 | 0.7501 | 0.3895 | 0.7820 | 0.6815 | 0.7182 | | 0.1422 | 10.11 | 3740 | 0.5004 | 0.7091 | 0.8216 | 0.8814 | 0.9440 | 0.7910 | 0.8762 | 0.5485 | 0.8660 | 0.8898 | 0.8353 | 0.8682 | 0.6705 | 0.7480 | 0.3998 | 0.7958 | 0.6824 | 0.7992 | | 0.1474 | 10.16 | 3760 | 0.5008 | 0.7061 | 0.8078 | 0.8804 | 0.9431 | 0.7985 | 0.8427 | 0.5068 | 0.8977 | 0.8594 | 0.8067 | 0.8678 | 0.6611 | 0.7675 | 0.4087 | 0.7956 | 0.6944 | 0.7476 | | 0.2039 | 10.22 | 3780 | 0.5088 | 0.7041 | 0.8036 | 0.8811 | 0.9412 | 0.7889 | 0.8745 | 0.4763 | 0.9135 | 0.8749 | 0.7559 | 0.8732 | 0.6680 | 0.7655 | 0.4085 | 0.8037 | 0.6881 | 0.7214 | | 0.1457 | 10.27 | 3800 | 0.5180 | 0.7033 | 0.8146 | 0.8783 | 0.9297 | 0.8361 | 0.8904 | 0.5206 | 0.9080 | 0.8667 | 0.7508 | 0.8675 | 0.6729 | 0.7511 | 0.4191 | 0.8009 | 0.6947 | 0.7172 | | 0.1477 | 10.32 | 3820 | 0.4620 | 0.7121 | 0.8255 | 0.8826 | 0.9348 | 0.8237 | 0.8810 | 0.5393 | 0.8774 | 0.8847 | 0.8373 | 0.8675 | 0.6712 | 0.7525 | 0.4031 | 0.7975 | 0.7074 | 0.7858 | | 0.1087 | 10.38 | 3840 | 0.4719 | 0.7157 | 0.8136 | 0.8848 | 0.9431 | 0.7906 | 0.8557 | 0.5369 | 0.9109 | 0.8630 | 0.7948 | 0.8692 | 0.6764 | 0.7595 | 0.4215 | 0.8049 | 0.7220 | 0.7561 | | 0.1731 | 10.43 | 3860 | 0.5375 | 0.7051 | 0.8194 | 0.8777 | 0.9247 | 0.7931 | 0.8780 | 0.5654 | 0.8817 | 0.8487 | 0.8444 | 0.8601 | 0.6704 | 0.7528 | 0.3970 | 0.7964 | 0.7141 | 0.7448 | | 0.9982 | 10.49 | 3880 | 0.5151 | 0.7084 | 0.8219 | 0.8807 | 0.9291 | 0.8161 | 0.8619 | 0.5430 | 0.8842 | 0.8760 | 0.8427 | 0.8632 | 0.6665 | 0.7463 | 0.4091 | 0.8008 | 0.6999 | 0.7732 | | 0.12 | 10.54 | 3900 | 0.5254 | 0.7133 | 0.8260 | 0.8821 | 0.9079 | 0.7965 | 0.8753 | 0.5491 | 0.8964 | 0.8801 | 0.8766 | 0.8599 | 0.6681 | 0.7574 | 0.4149 | 0.8056 | 0.7233 | 0.7636 | | 0.133 | 10.59 | 3920 | 0.4950 | 0.7072 | 0.8103 | 0.8799 | 0.9350 | 0.8045 | 0.8757 | 0.5162 | 0.9102 | 0.8673 | 0.7634 | 0.8666 | 0.6727 | 0.7653 | 0.4189 | 0.8004 | 0.7115 | 0.7148 | | 0.1647 | 10.65 | 3940 | 0.5245 | 0.7083 | 0.8167 | 0.8809 | 0.9372 | 0.8487 | 0.8663 | 0.5098 | 0.9043 | 0.8909 | 0.7597 | 0.8698 | 0.6721 | 0.7643 | 0.4243 | 0.8019 | 0.7053 | 0.7201 | | 0.065 | 10.7 | 3960 | 0.5537 | 0.7011 | 0.8036 | 0.8774 | 
0.9363 | 0.8164 | 0.8801 | 0.4670 | 0.9108 | 0.8814 | 0.7334 | 0.8625 | 0.6689 | 0.7544 | 0.4066 | 0.7976 | 0.7204 | 0.6977 | | 0.1435 | 10.76 | 3980 | 0.5455 | 0.7012 | 0.8083 | 0.8774 | 0.9263 | 0.8519 | 0.8807 | 0.4865 | 0.9244 | 0.8542 | 0.7340 | 0.8584 | 0.6684 | 0.7582 | 0.4070 | 0.8057 | 0.7154 | 0.6953 | | 0.13 | 10.81 | 4000 | 0.5234 | 0.7007 | 0.8093 | 0.8769 | 0.9446 | 0.8018 | 0.8865 | 0.5326 | 0.8979 | 0.8787 | 0.7231 | 0.8625 | 0.6785 | 0.7539 | 0.4086 | 0.8056 | 0.7043 | 0.6916 | | 0.1456 | 10.86 | 4020 | 0.5219 | 0.7034 | 0.8128 | 0.8768 | 0.9475 | 0.8208 | 0.8543 | 0.5534 | 0.8943 | 0.8907 | 0.7284 | 0.8647 | 0.6690 | 0.7711 | 0.4412 | 0.8021 | 0.6951 | 0.6809 | | 0.3105 | 10.92 | 4040 | 0.4999 | 0.7120 | 0.8296 | 0.8815 | 0.9333 | 0.8162 | 0.8473 | 0.6054 | 0.8790 | 0.8849 | 0.8409 | 0.8690 | 0.6639 | 0.7584 | 0.4195 | 0.7986 | 0.7170 | 0.7575 | | 0.5743 | 10.97 | 4060 | 0.5080 | 0.6985 | 0.8198 | 0.8729 | 0.9211 | 0.7929 | 0.8758 | 0.6033 | 0.8810 | 0.8614 | 0.8029 | 0.8644 | 0.6694 | 0.7446 | 0.4020 | 0.7849 | 0.7067 | 0.7175 | | 0.2954 | 11.03 | 4080 | 0.5080 | 0.7015 | 0.8152 | 0.8777 | 0.9313 | 0.8277 | 0.8680 | 0.5415 | 0.9046 | 0.8758 | 0.7578 | 0.8688 | 0.6716 | 0.7464 | 0.4125 | 0.8016 | 0.6860 | 0.7235 | | 0.1178 | 11.08 | 4100 | 0.5030 | 0.6969 | 0.8111 | 0.8777 | 0.9442 | 0.8384 | 0.8498 | 0.5436 | 0.9088 | 0.8641 | 0.7289 | 0.8691 | 0.6678 | 0.7462 | 0.3998 | 0.8139 | 0.6839 | 0.6976 | | 0.0749 | 11.14 | 4120 | 0.4972 | 0.7041 | 0.8051 | 0.8824 | 0.9416 | 0.8187 | 0.8661 | 0.4901 | 0.9275 | 0.8424 | 0.7494 | 0.8699 | 0.6804 | 0.7500 | 0.3903 | 0.8180 | 0.7050 | 0.7154 | | 0.1806 | 11.19 | 4140 | 0.4569 | 0.7187 | 0.8138 | 0.8919 | 0.9445 | 0.8277 | 0.8578 | 0.4420 | 0.9154 | 0.8618 | 0.8476 | 0.8677 | 0.6729 | 0.7428 | 0.3778 | 0.8314 | 0.7438 | 0.7945 | | 0.0825 | 11.24 | 4160 | 0.4618 | 0.7209 | 0.8263 | 0.8900 | 0.9348 | 0.8510 | 0.8607 | 0.5235 | 0.9125 | 0.8587 | 0.8425 | 0.8667 | 0.6719 | 0.7405 | 0.4026 | 0.8256 | 0.7454 | 0.7936 | | 0.167 | 11.3 | 4180 | 0.5092 | 0.7084 | 0.7979 | 0.8838 | 0.9347 | 0.7767 | 0.8705 | 0.4397 | 0.9360 | 0.8595 | 0.7682 | 0.8674 | 0.6710 | 0.7533 | 0.3887 | 0.8072 | 0.7399 | 0.7315 | | 0.2843 | 11.35 | 4200 | 0.4910 | 0.7010 | 0.8009 | 0.8794 | 0.9315 | 0.7853 | 0.8640 | 0.4931 | 0.9287 | 0.8426 | 0.7608 | 0.8638 | 0.6709 | 0.7455 | 0.3723 | 0.8053 | 0.7218 | 0.7270 | | 0.1661 | 11.41 | 4220 | 0.4854 | 0.7004 | 0.8171 | 0.8778 | 0.9279 | 0.8398 | 0.8774 | 0.5295 | 0.9025 | 0.8774 | 0.7649 | 0.8669 | 0.6667 | 0.7501 | 0.3953 | 0.8044 | 0.7029 | 0.7167 | | 0.177 | 11.46 | 4240 | 0.5016 | 0.7037 | 0.8149 | 0.8788 | 0.9345 | 0.8183 | 0.8682 | 0.5585 | 0.9094 | 0.8605 | 0.7550 | 0.8696 | 0.6801 | 0.7589 | 0.3955 | 0.8022 | 0.7029 | 0.7167 | | 0.2762 | 11.51 | 4260 | 0.5231 | 0.7084 | 0.8047 | 0.8815 | 0.9408 | 0.8002 | 0.8848 | 0.4725 | 0.9131 | 0.8584 | 0.7629 | 0.8670 | 0.6771 | 0.7589 | 0.4082 | 0.7999 | 0.7243 | 0.7233 | | 0.1087 | 11.57 | 4280 | 0.5379 | 0.7061 | 0.8148 | 0.8798 | 0.9270 | 0.8000 | 0.8900 | 0.5315 | 0.9111 | 0.8854 | 0.7588 | 0.8675 | 0.6656 | 0.7573 | 0.4241 | 0.8042 | 0.7035 | 0.7203 | | 0.2151 | 11.62 | 4300 | 0.5152 | 0.7013 | 0.8163 | 0.8753 | 0.9364 | 0.8048 | 0.8762 | 0.5824 | 0.8941 | 0.8891 | 0.7313 | 0.8642 | 0.6766 | 0.7707 | 0.4216 | 0.7996 | 0.6874 | 0.6887 | | 0.109 | 11.68 | 4320 | 0.5020 | 0.7061 | 0.8112 | 0.8797 | 0.9462 | 0.8171 | 0.8883 | 0.5476 | 0.9131 | 0.8507 | 0.7152 | 0.8697 | 0.6804 | 0.7691 | 0.4209 | 0.8057 | 0.7149 | 0.6817 | | 0.1346 | 11.73 | 4340 | 0.5073 | 0.6993 | 0.8117 
| 0.8727 | 0.9377 | 0.8015 | 0.8631 | 0.5994 | 0.8975 | 0.8537 | 0.7288 | 0.8630 | 0.6737 | 0.7727 | 0.4217 | 0.7864 | 0.6963 | 0.6810 | | 0.1628 | 11.78 | 4360 | 0.5946 | 0.6996 | 0.8040 | 0.8743 | 0.9444 | 0.8009 | 0.8730 | 0.5260 | 0.9015 | 0.8713 | 0.7108 | 0.8636 | 0.6801 | 0.7713 | 0.4340 | 0.7941 | 0.6759 | 0.6784 | | 0.1078 | 11.84 | 4380 | 0.5358 | 0.7136 | 0.8209 | 0.8817 | 0.9332 | 0.8559 | 0.8716 | 0.5406 | 0.9118 | 0.8742 | 0.7593 | 0.8690 | 0.6697 | 0.7717 | 0.4453 | 0.7988 | 0.7219 | 0.7184 | | 0.2878 | 11.89 | 4400 | 0.5400 | 0.7119 | 0.8061 | 0.8816 | 0.9484 | 0.8077 | 0.8697 | 0.5038 | 0.9133 | 0.8506 | 0.7493 | 0.8681 | 0.6690 | 0.7695 | 0.4398 | 0.7947 | 0.7247 | 0.7178 | | 0.163 | 11.95 | 4420 | 0.4955 | 0.7137 | 0.8150 | 0.8834 | 0.9442 | 0.8292 | 0.8819 | 0.5228 | 0.9066 | 0.8425 | 0.7777 | 0.8653 | 0.6670 | 0.7683 | 0.4287 | 0.8033 | 0.7205 | 0.7428 | | 0.1191 | 12.0 | 4440 | 0.4637 | 0.7248 | 0.8333 | 0.8877 | 0.9429 | 0.8129 | 0.8597 | 0.5923 | 0.8803 | 0.8988 | 0.8463 | 0.8701 | 0.6726 | 0.7732 | 0.4433 | 0.8032 | 0.7082 | 0.8032 | | 0.2084 | 12.05 | 4460 | 0.5186 | 0.7122 | 0.8122 | 0.8814 | 0.9342 | 0.8089 | 0.8732 | 0.5363 | 0.9210 | 0.8534 | 0.7581 | 0.8728 | 0.6772 | 0.7707 | 0.4206 | 0.7911 | 0.7312 | 0.7215 | | 0.1854 | 12.11 | 4480 | 0.5230 | 0.7061 | 0.8127 | 0.8792 | 0.9359 | 0.8309 | 0.8947 | 0.5021 | 0.9033 | 0.8697 | 0.7521 | 0.8673 | 0.6797 | 0.7500 | 0.4015 | 0.7938 | 0.7296 | 0.7206 | | 0.1889 | 12.16 | 4500 | 0.4837 | 0.7133 | 0.8171 | 0.8825 | 0.9415 | 0.8475 | 0.8773 | 0.5297 | 0.9081 | 0.8471 | 0.7688 | 0.8708 | 0.6769 | 0.7718 | 0.4255 | 0.7964 | 0.7211 | 0.7310 | | 0.153 | 12.22 | 4520 | 0.4712 | 0.7156 | 0.8198 | 0.8839 | 0.9316 | 0.7986 | 0.8801 | 0.5659 | 0.9121 | 0.8510 | 0.7995 | 0.8707 | 0.6732 | 0.7598 | 0.4311 | 0.8026 | 0.7179 | 0.7541 | | 0.2379 | 12.27 | 4540 | 0.5025 | 0.7170 | 0.8201 | 0.8842 | 0.9323 | 0.8009 | 0.8691 | 0.5711 | 0.9166 | 0.8629 | 0.7875 | 0.8708 | 0.6748 | 0.7623 | 0.4388 | 0.8034 | 0.7225 | 0.7464 | | 0.1759 | 12.32 | 4560 | 0.4684 | 0.7206 | 0.8304 | 0.8866 | 0.9355 | 0.8413 | 0.8658 | 0.5680 | 0.9002 | 0.8947 | 0.8073 | 0.8724 | 0.6724 | 0.7693 | 0.4344 | 0.8064 | 0.7223 | 0.7673 | | 0.0683 | 12.38 | 4580 | 0.4606 | 0.7299 | 0.8333 | 0.8927 | 0.9351 | 0.8210 | 0.8745 | 0.5662 | 0.9084 | 0.8799 | 0.8479 | 0.8744 | 0.6810 | 0.7725 | 0.4329 | 0.8203 | 0.7247 | 0.8037 | | 0.1155 | 12.43 | 4600 | 0.4529 | 0.7329 | 0.8351 | 0.8962 | 0.9433 | 0.8439 | 0.8791 | 0.5437 | 0.9085 | 0.8782 | 0.8491 | 0.8791 | 0.6827 | 0.7695 | 0.4273 | 0.8305 | 0.7390 | 0.8022 | | 0.1194 | 12.49 | 4620 | 0.5244 | 0.7090 | 0.8053 | 0.8809 | 0.9429 | 0.8129 | 0.8577 | 0.5343 | 0.9300 | 0.8144 | 0.7449 | 0.8675 | 0.6849 | 0.7715 | 0.4072 | 0.8011 | 0.7237 | 0.7073 | | 0.2681 | 12.54 | 4640 | 0.5301 | 0.7105 | 0.8124 | 0.8828 | 0.9404 | 0.8324 | 0.8551 | 0.5272 | 0.9215 | 0.8453 | 0.7652 | 0.8713 | 0.6805 | 0.7576 | 0.3970 | 0.8032 | 0.7332 | 0.7306 | | 0.173 | 12.59 | 4660 | 0.5097 | 0.7125 | 0.8096 | 0.8844 | 0.9421 | 0.8165 | 0.8539 | 0.5362 | 0.9330 | 0.8189 | 0.7666 | 0.8741 | 0.6824 | 0.7576 | 0.4051 | 0.8070 | 0.7302 | 0.7314 | | 0.3458 | 12.65 | 4680 | 0.4965 | 0.7078 | 0.8089 | 0.8813 | 0.9473 | 0.7956 | 0.8637 | 0.5241 | 0.9076 | 0.8714 | 0.7527 | 0.8705 | 0.6782 | 0.7559 | 0.4082 | 0.8022 | 0.7203 | 0.7193 | | 0.1284 | 12.7 | 4700 | 0.4673 | 0.7267 | 0.8249 | 0.8904 | 0.9401 | 0.8143 | 0.8712 | 0.5281 | 0.9047 | 0.8795 | 0.8362 | 0.8742 | 0.6845 | 0.7703 | 0.4401 | 0.8114 | 0.7173 | 0.7891 | | 0.2377 | 12.76 | 4720 | 0.4589 | 
0.7292 | 0.8209 | 0.8931 | 0.9397 | 0.7997 | 0.8709 | 0.5099 | 0.9186 | 0.8616 | 0.8461 | 0.8739 | 0.6842 | 0.7705 | 0.4338 | 0.8185 | 0.7183 | 0.8054 | | 0.1044 | 12.81 | 4740 | 0.5093 | 0.7087 | 0.8126 | 0.8812 | 0.9413 | 0.8186 | 0.8831 | 0.5200 | 0.9077 | 0.8606 | 0.7569 | 0.8702 | 0.6835 | 0.7631 | 0.3993 | 0.7989 | 0.7215 | 0.7244 | | 0.1222 | 12.86 | 4760 | 0.5286 | 0.7017 | 0.8075 | 0.8792 | 0.9358 | 0.8207 | 0.8939 | 0.4502 | 0.9008 | 0.8964 | 0.7550 | 0.8687 | 0.6875 | 0.7484 | 0.3913 | 0.8030 | 0.6905 | 0.7223 | | 0.1819 | 12.92 | 4780 | 0.5473 | 0.6993 | 0.8006 | 0.8783 | 0.9441 | 0.8377 | 0.8787 | 0.4731 | 0.9219 | 0.8221 | 0.7264 | 0.8682 | 0.6807 | 0.7669 | 0.4151 | 0.8093 | 0.6595 | 0.6955 | | 0.1422 | 12.97 | 4800 | 0.5042 | 0.7089 | 0.8038 | 0.8846 | 0.9478 | 0.8060 | 0.8793 | 0.4616 | 0.9224 | 0.8596 | 0.7501 | 0.8807 | 0.6850 | 0.7750 | 0.4121 | 0.8116 | 0.6843 | 0.7136 | | 0.1449 | 13.03 | 4820 | 0.5165 | 0.7113 | 0.8117 | 0.8838 | 0.9430 | 0.8128 | 0.8994 | 0.4908 | 0.9097 | 0.8670 | 0.7594 | 0.8753 | 0.6866 | 0.7620 | 0.4304 | 0.8104 | 0.6897 | 0.7246 | | 0.1895 | 13.08 | 4840 | 0.4997 | 0.7116 | 0.8156 | 0.8838 | 0.9402 | 0.8201 | 0.8990 | 0.5111 | 0.9079 | 0.8612 | 0.7695 | 0.8752 | 0.6873 | 0.7613 | 0.4137 | 0.8068 | 0.7013 | 0.7355 | | 0.1773 | 13.14 | 4860 | 0.4964 | 0.7134 | 0.8159 | 0.8853 | 0.9392 | 0.8523 | 0.8867 | 0.4908 | 0.9131 | 0.8315 | 0.7975 | 0.8720 | 0.6819 | 0.7579 | 0.4004 | 0.8071 | 0.7142 | 0.7601 | | 0.1133 | 13.19 | 4880 | 0.4879 | 0.7209 | 0.8143 | 0.8892 | 0.9500 | 0.8025 | 0.8596 | 0.5030 | 0.9112 | 0.8609 | 0.8127 | 0.8726 | 0.6828 | 0.7623 | 0.4216 | 0.8141 | 0.7138 | 0.7792 | | 0.3126 | 13.24 | 4900 | 0.4768 | 0.7280 | 0.8242 | 0.8943 | 0.9430 | 0.8414 | 0.8803 | 0.4860 | 0.9146 | 0.8581 | 0.8457 | 0.8785 | 0.6857 | 0.7674 | 0.4185 | 0.8228 | 0.7173 | 0.8061 | | 0.0713 | 13.3 | 4920 | 0.4557 | 0.7221 | 0.8182 | 0.8892 | 0.9449 | 0.8233 | 0.8910 | 0.5045 | 0.9109 | 0.8382 | 0.8144 | 0.8719 | 0.6798 | 0.7649 | 0.4317 | 0.8131 | 0.7138 | 0.7794 | | 0.2414 | 13.35 | 4940 | 0.5193 | 0.7164 | 0.8193 | 0.8850 | 0.9397 | 0.8366 | 0.8864 | 0.5090 | 0.9068 | 0.8800 | 0.7763 | 0.8758 | 0.6879 | 0.7636 | 0.4349 | 0.8051 | 0.7064 | 0.7408 | | 0.3379 | 13.41 | 4960 | 0.5566 | 0.7060 | 0.8104 | 0.8801 | 0.9431 | 0.7932 | 0.8922 | 0.5346 | 0.9052 | 0.8554 | 0.7491 | 0.8748 | 0.6793 | 0.7603 | 0.4312 | 0.8025 | 0.6797 | 0.7142 | | 0.1459 | 13.46 | 4980 | 0.5457 | 0.7069 | 0.8116 | 0.8791 | 0.9499 | 0.7954 | 0.8754 | 0.5687 | 0.9017 | 0.8567 | 0.7333 | 0.8681 | 0.6779 | 0.7643 | 0.4273 | 0.8002 | 0.7124 | 0.6983 | | 1.3247 | 13.51 | 5000 | 0.5213 | 0.7102 | 0.8104 | 0.8826 | 0.9481 | 0.8174 | 0.8809 | 0.5285 | 0.9128 | 0.8244 | 0.7607 | 0.8744 | 0.6857 | 0.7585 | 0.4298 | 0.8059 | 0.6946 | 0.7226 | | 0.0945 | 13.57 | 5020 | 0.5877 | 0.7072 | 0.8020 | 0.8808 | 0.9502 | 0.7836 | 0.8803 | 0.4946 | 0.9134 | 0.8600 | 0.7316 | 0.8705 | 0.6829 | 0.7679 | 0.4203 | 0.8015 | 0.7075 | 0.6995 | | 0.1065 | 13.62 | 5040 | 0.5420 | 0.7147 | 0.8126 | 0.8850 | 0.9467 | 0.8265 | 0.8727 | 0.4935 | 0.9141 | 0.8791 | 0.7556 | 0.8802 | 0.6870 | 0.7716 | 0.4257 | 0.8024 | 0.7165 | 0.7196 | | 0.226 | 13.68 | 5060 | 0.5306 | 0.7122 | 0.8149 | 0.8837 | 0.9501 | 0.8042 | 0.8731 | 0.5303 | 0.9014 | 0.8893 | 0.7562 | 0.8806 | 0.6831 | 0.7733 | 0.4246 | 0.8020 | 0.6988 | 0.7231 | | 0.2018 | 13.73 | 5080 | 0.5174 | 0.7051 | 0.8066 | 0.8804 | 0.9459 | 0.7958 | 0.8441 | 0.5164 | 0.9092 | 0.8776 | 0.7574 | 0.8729 | 0.6732 | 0.7581 | 0.4096 | 0.8001 | 0.6977 | 0.7238 | | 0.1974 | 13.78 | 
5100 | 0.4997 | 0.7140 | 0.8193 | 0.8848 | 0.9456 | 0.8214 | 0.8686 | 0.5417 | 0.9056 | 0.8859 | 0.7660 | 0.8790 | 0.6791 | 0.7627 | 0.4327 | 0.8085 | 0.7099 | 0.7259 | | 0.0747 | 13.84 | 5120 | 0.5084 | 0.7104 | 0.8162 | 0.8831 | 0.9407 | 0.8148 | 0.8771 | 0.5148 | 0.9045 | 0.8993 | 0.7620 | 0.8749 | 0.6827 | 0.7573 | 0.4234 | 0.8101 | 0.7011 | 0.7234 | | 0.1863 | 13.89 | 5140 | 0.4846 | 0.7132 | 0.8131 | 0.8851 | 0.9435 | 0.8225 | 0.8763 | 0.5215 | 0.9223 | 0.8435 | 0.7626 | 0.8754 | 0.6874 | 0.7607 | 0.4122 | 0.8121 | 0.7200 | 0.7247 | | 0.1009 | 13.95 | 5160 | 0.4980 | 0.7127 | 0.8124 | 0.8849 | 0.9432 | 0.8048 | 0.8786 | 0.5179 | 0.9185 | 0.8623 | 0.7616 | 0.8754 | 0.6867 | 0.7677 | 0.4094 | 0.8115 | 0.7132 | 0.7250 | | 0.1497 | 14.0 | 5180 | 0.4876 | 0.7174 | 0.8167 | 0.8871 | 0.9474 | 0.8119 | 0.8701 | 0.5091 | 0.9042 | 0.8754 | 0.7985 | 0.8746 | 0.6890 | 0.7656 | 0.4012 | 0.8090 | 0.7248 | 0.7579 | | 0.0537 | 14.05 | 5200 | 0.5260 | 0.7166 | 0.8159 | 0.8876 | 0.9395 | 0.8478 | 0.8777 | 0.4773 | 0.9223 | 0.8665 | 0.7802 | 0.8755 | 0.6866 | 0.7588 | 0.4029 | 0.8130 | 0.7341 | 0.7456 | | 1.577 | 14.11 | 5220 | 0.5184 | 0.7170 | 0.8182 | 0.8868 | 0.9386 | 0.8285 | 0.8968 | 0.5057 | 0.9143 | 0.8580 | 0.7856 | 0.8747 | 0.6852 | 0.7578 | 0.4133 | 0.8114 | 0.7333 | 0.7433 | | 0.1555 | 14.16 | 5240 | 0.5294 | 0.7118 | 0.8167 | 0.8833 | 0.9445 | 0.8181 | 0.9006 | 0.5216 | 0.9003 | 0.8680 | 0.7638 | 0.8744 | 0.6851 | 0.7624 | 0.4156 | 0.8043 | 0.7154 | 0.7256 | | 0.1409 | 14.22 | 5260 | 0.5103 | 0.7094 | 0.8106 | 0.8825 | 0.9487 | 0.8167 | 0.8786 | 0.5029 | 0.9044 | 0.8622 | 0.7607 | 0.8762 | 0.6880 | 0.7732 | 0.4084 | 0.8036 | 0.7067 | 0.7096 | | 0.0525 | 14.27 | 5280 | 0.5201 | 0.7205 | 0.8239 | 0.8895 | 0.9428 | 0.8287 | 0.8798 | 0.5095 | 0.8975 | 0.8714 | 0.8376 | 0.8778 | 0.6814 | 0.7685 | 0.4138 | 0.8158 | 0.7183 | 0.7678 | | 0.1194 | 14.32 | 5300 | 0.5412 | 0.7159 | 0.8292 | 0.8846 | 0.9259 | 0.8145 | 0.8835 | 0.5631 | 0.8893 | 0.8789 | 0.8495 | 0.8710 | 0.6724 | 0.7624 | 0.4242 | 0.8097 | 0.7220 | 0.7498 | | 0.161 | 14.38 | 5320 | 0.4771 | 0.7223 | 0.8160 | 0.8907 | 0.9417 | 0.8029 | 0.8723 | 0.4857 | 0.9153 | 0.8615 | 0.8330 | 0.8762 | 0.6862 | 0.7692 | 0.4117 | 0.8178 | 0.7245 | 0.7704 | | 0.0956 | 14.43 | 5340 | 0.4874 | 0.7289 | 0.8257 | 0.8929 | 0.9517 | 0.8107 | 0.8780 | 0.5159 | 0.8961 | 0.8973 | 0.8304 | 0.8754 | 0.6903 | 0.7736 | 0.4321 | 0.8216 | 0.7228 | 0.7863 | | 0.2009 | 14.49 | 5360 | 0.5104 | 0.7230 | 0.8288 | 0.8895 | 0.9319 | 0.8522 | 0.8916 | 0.5216 | 0.9051 | 0.8548 | 0.8442 | 0.8736 | 0.6803 | 0.7655 | 0.4261 | 0.8161 | 0.7255 | 0.7739 | | 0.1389 | 14.54 | 5380 | 0.5389 | 0.7100 | 0.8149 | 0.8802 | 0.9451 | 0.8162 | 0.8875 | 0.5532 | 0.9051 | 0.8674 | 0.7299 | 0.8694 | 0.6863 | 0.7588 | 0.4304 | 0.7993 | 0.7244 | 0.7014 | | 0.113 | 14.59 | 5400 | 0.5308 | 0.7129 | 0.8127 | 0.8843 | 0.9455 | 0.8141 | 0.8665 | 0.4971 | 0.9079 | 0.8915 | 0.7665 | 0.8802 | 0.6843 | 0.7596 | 0.4138 | 0.7984 | 0.7216 | 0.7324 | | 0.1371 | 14.65 | 5420 | 0.5215 | 0.7049 | 0.8004 | 0.8832 | 0.9513 | 0.8307 | 0.8734 | 0.4225 | 0.9123 | 0.8422 | 0.7706 | 0.8758 | 0.6785 | 0.7591 | 0.3686 | 0.7989 | 0.7192 | 0.7343 | | 0.3141 | 14.7 | 5440 | 0.5218 | 0.7074 | 0.8086 | 0.8841 | 0.9417 | 0.8431 | 0.8839 | 0.4166 | 0.9078 | 0.8989 | 0.7681 | 0.8791 | 0.6841 | 0.7610 | 0.3773 | 0.8015 | 0.7162 | 0.7323 | | 0.1709 | 14.76 | 5460 | 0.4785 | 0.7180 | 0.8145 | 0.8884 | 0.9507 | 0.8185 | 0.8664 | 0.4890 | 0.9110 | 0.8789 | 0.7870 | 0.8803 | 0.6828 | 0.7678 | 0.4081 | 0.8093 | 0.7288 | 0.7491 | | 
0.1482 | 14.81 | 5480 | 0.4938 | 0.7181 | 0.8246 | 0.8870 | 0.9412 | 0.8479 | 0.8573 | 0.5339 | 0.9079 | 0.8986 | 0.7853 | 0.8802 | 0.6792 | 0.7679 | 0.4275 | 0.8084 | 0.7177 | 0.7458 | | 0.1614 | 14.86 | 5500 | 0.4950 | 0.7201 | 0.8244 | 0.8881 | 0.9428 | 0.8397 | 0.8713 | 0.5426 | 0.9109 | 0.8743 | 0.7891 | 0.8791 | 0.6815 | 0.7697 | 0.4200 | 0.8099 | 0.7285 | 0.7520 | | 0.9558 | 14.92 | 5520 | 0.4832 | 0.7250 | 0.8235 | 0.8908 | 0.9528 | 0.8248 | 0.8712 | 0.5252 | 0.9005 | 0.8763 | 0.8140 | 0.8778 | 0.6849 | 0.7681 | 0.4287 | 0.8141 | 0.7253 | 0.7760 | | 0.1639 | 14.97 | 5540 | 0.4675 | 0.7237 | 0.8223 | 0.8900 | 0.9493 | 0.8379 | 0.8569 | 0.5267 | 0.9136 | 0.8845 | 0.7875 | 0.8788 | 0.6820 | 0.7721 | 0.4330 | 0.8138 | 0.7322 | 0.7539 | | 0.2244 | 15.03 | 5560 | 0.4465 | 0.7242 | 0.8188 | 0.8913 | 0.9494 | 0.8069 | 0.8736 | 0.5314 | 0.9218 | 0.8570 | 0.7915 | 0.8778 | 0.6772 | 0.7719 | 0.4264 | 0.8195 | 0.7391 | 0.7573 | | 0.1792 | 15.08 | 5580 | 0.4587 | 0.7288 | 0.8168 | 0.8937 | 0.9528 | 0.7940 | 0.8580 | 0.5126 | 0.9221 | 0.8627 | 0.8152 | 0.8767 | 0.6833 | 0.7738 | 0.4250 | 0.8207 | 0.7437 | 0.7785 | | 0.1632 | 15.14 | 5600 | 0.4949 | 0.7247 | 0.8112 | 0.8912 | 0.9464 | 0.8191 | 0.8799 | 0.4828 | 0.9362 | 0.8244 | 0.7895 | 0.8750 | 0.6915 | 0.7770 | 0.4215 | 0.8169 | 0.7372 | 0.7541 | | 0.1682 | 15.19 | 5620 | 0.4614 | 0.7352 | 0.8293 | 0.8963 | 0.9456 | 0.8247 | 0.8888 | 0.5385 | 0.9194 | 0.8586 | 0.8294 | 0.8743 | 0.6888 | 0.7706 | 0.4456 | 0.8324 | 0.7414 | 0.7936 | | 0.0872 | 15.24 | 5640 | 0.4925 | 0.7254 | 0.8245 | 0.8910 | 0.9493 | 0.8148 | 0.8935 | 0.5297 | 0.8996 | 0.8640 | 0.8208 | 0.8722 | 0.6820 | 0.7673 | 0.4316 | 0.8197 | 0.7191 | 0.7859 | | 0.1775 | 15.3 | 5660 | 0.5081 | 0.7243 | 0.8243 | 0.8899 | 0.9489 | 0.8139 | 0.8769 | 0.5402 | 0.8986 | 0.8709 | 0.8208 | 0.8734 | 0.6832 | 0.7708 | 0.4370 | 0.8161 | 0.7028 | 0.7869 | | 0.0841 | 15.35 | 5680 | 0.5488 | 0.7109 | 0.8155 | 0.8808 | 0.9515 | 0.8092 | 0.8628 | 0.5433 | 0.8878 | 0.8921 | 0.7621 | 0.8709 | 0.6787 | 0.7674 | 0.4393 | 0.7958 | 0.6944 | 0.7297 | | 0.1122 | 15.41 | 5700 | 0.5543 | 0.7054 | 0.8061 | 0.8805 | 0.9511 | 0.8116 | 0.8649 | 0.5180 | 0.9127 | 0.8522 | 0.7325 | 0.8724 | 0.6796 | 0.7698 | 0.4223 | 0.8059 | 0.6868 | 0.7009 | | 0.1993 | 15.46 | 5720 | 0.5177 | 0.7158 | 0.8148 | 0.8845 | 0.9460 | 0.8084 | 0.8672 | 0.5195 | 0.9054 | 0.8827 | 0.7748 | 0.8746 | 0.6815 | 0.7720 | 0.4406 | 0.8033 | 0.6999 | 0.7390 | | 0.0915 | 15.51 | 5740 | 0.4926 | 0.7269 | 0.8195 | 0.8926 | 0.9543 | 0.8048 | 0.8721 | 0.5067 | 0.9070 | 0.8705 | 0.8209 | 0.8722 | 0.6812 | 0.7736 | 0.4339 | 0.8245 | 0.7194 | 0.7840 | | 0.1622 | 15.57 | 5760 | 0.4791 | 0.7249 | 0.8201 | 0.8926 | 0.9443 | 0.8324 | 0.8663 | 0.4911 | 0.9170 | 0.8504 | 0.8394 | 0.8724 | 0.6812 | 0.7699 | 0.4131 | 0.8251 | 0.7191 | 0.7935 | | 0.0614 | 15.62 | 5780 | 0.4569 | 0.7331 | 0.8331 | 0.8950 | 0.9436 | 0.8198 | 0.8807 | 0.5573 | 0.9056 | 0.8760 | 0.8489 | 0.8775 | 0.6837 | 0.7662 | 0.4476 | 0.8260 | 0.7218 | 0.8086 | | 0.1372 | 15.68 | 5800 | 0.5126 | 0.7160 | 0.8237 | 0.8824 | 0.9411 | 0.8218 | 0.8873 | 0.5416 | 0.8821 | 0.8988 | 0.7929 | 0.8731 | 0.6832 | 0.7698 | 0.4406 | 0.7900 | 0.6973 | 0.7580 | | 0.1055 | 15.73 | 5820 | 0.5358 | 0.7106 | 0.8066 | 0.8831 | 0.9504 | 0.7913 | 0.8835 | 0.4953 | 0.9073 | 0.8540 | 0.7643 | 0.8777 | 0.6850 | 0.7731 | 0.4257 | 0.8016 | 0.6824 | 0.7286 | | 0.1022 | 15.78 | 5840 | 0.5055 | 0.7157 | 0.8227 | 0.8847 | 0.9451 | 0.8186 | 0.8651 | 0.5727 | 0.8983 | 0.8611 | 0.7978 | 0.8791 | 0.6862 | 0.7720 | 0.4170 | 0.8008 | 
0.6971 | 0.7577 | | 0.1283 | 15.84 | 5860 | 0.4868 | 0.7249 | 0.8213 | 0.8912 | 0.9564 | 0.7839 | 0.8817 | 0.5548 | 0.8999 | 0.8506 | 0.8218 | 0.8767 | 0.6824 | 0.7717 | 0.4232 | 0.8163 | 0.7153 | 0.7886 | | 0.1489 | 15.89 | 5880 | 0.4910 | 0.7252 | 0.8211 | 0.8910 | 0.9491 | 0.8177 | 0.8487 | 0.5199 | 0.9098 | 0.8839 | 0.8183 | 0.8814 | 0.6875 | 0.7682 | 0.4325 | 0.8119 | 0.7123 | 0.7822 | | 0.2806 | 15.95 | 5900 | 0.4945 | 0.7297 | 0.8298 | 0.8933 | 0.9473 | 0.8097 | 0.8824 | 0.5494 | 0.8988 | 0.8796 | 0.8414 | 0.8799 | 0.6878 | 0.7684 | 0.4345 | 0.8182 | 0.7140 | 0.8054 | | 0.1371 | 16.0 | 5920 | 0.5251 | 0.7197 | 0.8139 | 0.8887 | 0.9426 | 0.8010 | 0.8877 | 0.4943 | 0.9202 | 0.8620 | 0.7895 | 0.8796 | 0.6862 | 0.7629 | 0.4191 | 0.8097 | 0.7264 | 0.7541 | | 0.0901 | 16.05 | 5940 | 0.4934 | 0.7210 | 0.8157 | 0.8892 | 0.9472 | 0.8042 | 0.8829 | 0.5010 | 0.9141 | 0.8691 | 0.7911 | 0.8806 | 0.6862 | 0.7677 | 0.4226 | 0.8096 | 0.7254 | 0.7548 | | 0.1659 | 16.11 | 5960 | 0.5388 | 0.7135 | 0.8112 | 0.8853 | 0.9461 | 0.8085 | 0.8836 | 0.4980 | 0.9167 | 0.8690 | 0.7566 | 0.8772 | 0.6831 | 0.7642 | 0.4175 | 0.8082 | 0.7187 | 0.7255 | | 0.1072 | 16.16 | 5980 | 0.5588 | 0.7167 | 0.8174 | 0.8839 | 0.9342 | 0.8372 | 0.8710 | 0.5212 | 0.9190 | 0.8699 | 0.7694 | 0.8747 | 0.6831 | 0.7671 | 0.4372 | 0.7971 | 0.7244 | 0.7334 | | 0.6106 | 16.22 | 6000 | 0.5587 | 0.7181 | 0.8198 | 0.8845 | 0.9434 | 0.8296 | 0.8722 | 0.5424 | 0.9078 | 0.8786 | 0.7648 | 0.8786 | 0.6844 | 0.7671 | 0.4477 | 0.7972 | 0.7247 | 0.7266 | | 0.1578 | 16.27 | 6020 | 0.5023 | 0.7234 | 0.8220 | 0.8891 | 0.9409 | 0.8351 | 0.8720 | 0.5270 | 0.9195 | 0.8711 | 0.7885 | 0.8785 | 0.6842 | 0.7670 | 0.4469 | 0.8123 | 0.7247 | 0.7502 | | 0.0939 | 16.32 | 6040 | 0.5277 | 0.7183 | 0.8133 | 0.8870 | 0.9397 | 0.8046 | 0.8750 | 0.5381 | 0.9354 | 0.8350 | 0.7648 | 0.8789 | 0.6819 | 0.7673 | 0.4435 | 0.8116 | 0.7149 | 0.7299 | | 0.0552 | 16.38 | 6060 | 0.5900 | 0.7183 | 0.8119 | 0.8867 | 0.9528 | 0.7929 | 0.8738 | 0.5445 | 0.9230 | 0.8467 | 0.7495 | 0.8734 | 0.6803 | 0.7708 | 0.4422 | 0.8140 | 0.7257 | 0.7216 | | 0.156 | 16.43 | 6080 | 0.5169 | 0.7219 | 0.8231 | 0.8887 | 0.9471 | 0.8251 | 0.8793 | 0.5582 | 0.9132 | 0.8552 | 0.7836 | 0.8762 | 0.6823 | 0.7677 | 0.4377 | 0.8157 | 0.7242 | 0.7494 | | 0.1466 | 16.49 | 6100 | 0.5293 | 0.7191 | 0.8131 | 0.8874 | 0.9439 | 0.7994 | 0.8523 | 0.5236 | 0.9240 | 0.8622 | 0.7863 | 0.8729 | 0.6835 | 0.7682 | 0.4200 | 0.8108 | 0.7252 | 0.7531 | | 0.0852 | 16.54 | 6120 | 0.5317 | 0.7181 | 0.8130 | 0.8870 | 0.9379 | 0.8027 | 0.8648 | 0.5091 | 0.9258 | 0.8599 | 0.7908 | 0.8729 | 0.6826 | 0.7639 | 0.4151 | 0.8087 | 0.7277 | 0.7560 | | 0.4519 | 16.59 | 6140 | 0.5177 | 0.7214 | 0.8213 | 0.8892 | 0.9425 | 0.8270 | 0.8827 | 0.5092 | 0.9116 | 0.8816 | 0.7944 | 0.8768 | 0.6819 | 0.7646 | 0.4260 | 0.8139 | 0.7273 | 0.7592 | | 0.0944 | 16.65 | 6160 | 0.5116 | 0.7197 | 0.8169 | 0.8884 | 0.9431 | 0.8113 | 0.8838 | 0.5071 | 0.9154 | 0.8653 | 0.7923 | 0.8761 | 0.6829 | 0.7623 | 0.4193 | 0.8116 | 0.7286 | 0.7574 | | 0.0866 | 16.7 | 6180 | 0.5518 | 0.7121 | 0.8173 | 0.8824 | 0.9349 | 0.8215 | 0.8803 | 0.5341 | 0.9130 | 0.8797 | 0.7574 | 0.8735 | 0.6830 | 0.7632 | 0.4301 | 0.8022 | 0.7068 | 0.7263 | | 0.1709 | 16.76 | 6200 | 0.5418 | 0.7124 | 0.8147 | 0.8834 | 0.9373 | 0.8074 | 0.8946 | 0.5317 | 0.9159 | 0.8523 | 0.7637 | 0.8738 | 0.6838 | 0.7671 | 0.4277 | 0.8068 | 0.6988 | 0.7286 | | 0.1937 | 16.81 | 6220 | 0.5486 | 0.7134 | 0.8134 | 0.8843 | 0.9376 | 0.8241 | 0.8861 | 0.5029 | 0.9209 | 0.8619 | 0.7600 | 0.8735 | 0.6849 | 0.7700 | 
0.4306 | 0.8101 | 0.6983 | 0.7264 | | 2.8929 | 16.86 | 6240 | 0.5439 | 0.7128 | 0.8114 | 0.8842 | 0.9407 | 0.8229 | 0.8945 | 0.4935 | 0.9189 | 0.8496 | 0.7597 | 0.8731 | 0.6855 | 0.7738 | 0.4234 | 0.8083 | 0.7000 | 0.7255 | | 0.0648 | 16.92 | 6260 | 0.4885 | 0.7168 | 0.8199 | 0.8846 | 0.9396 | 0.8333 | 0.8732 | 0.5511 | 0.9166 | 0.8625 | 0.7629 | 0.8735 | 0.6843 | 0.7725 | 0.4493 | 0.8092 | 0.7019 | 0.7271 | | 0.1542 | 16.97 | 6280 | 0.5452 | 0.7158 | 0.8174 | 0.8834 | 0.9369 | 0.8164 | 0.8900 | 0.5580 | 0.9196 | 0.8416 | 0.7590 | 0.8737 | 0.6874 | 0.7707 | 0.4474 | 0.8033 | 0.7025 | 0.7255 | | 0.0902 | 17.03 | 6300 | 0.5273 | 0.7207 | 0.8175 | 0.8859 | 0.9384 | 0.8322 | 0.8773 | 0.5288 | 0.9223 | 0.8490 | 0.7743 | 0.8735 | 0.6886 | 0.7738 | 0.4484 | 0.8033 | 0.7189 | 0.7383 | | 0.1217 | 17.08 | 6320 | 0.4770 | 0.7303 | 0.8247 | 0.8930 | 0.9520 | 0.8060 | 0.8844 | 0.5436 | 0.9072 | 0.8574 | 0.8222 | 0.8776 | 0.6872 | 0.7754 | 0.4432 | 0.8183 | 0.7297 | 0.7811 | | 0.0718 | 17.14 | 6340 | 0.4914 | 0.7325 | 0.8319 | 0.8942 | 0.9450 | 0.8564 | 0.8866 | 0.5467 | 0.9128 | 0.8454 | 0.8305 | 0.8794 | 0.6865 | 0.7745 | 0.4518 | 0.8209 | 0.7239 | 0.7906 | | 0.2373 | 17.19 | 6360 | 0.5114 | 0.7229 | 0.8156 | 0.8895 | 0.9504 | 0.8186 | 0.8944 | 0.4887 | 0.9133 | 0.8611 | 0.7828 | 0.8782 | 0.6909 | 0.7748 | 0.4303 | 0.8105 | 0.7273 | 0.7484 | | 0.7583 | 17.24 | 6380 | 0.5510 | 0.7158 | 0.8129 | 0.8861 | 0.9501 | 0.8350 | 0.8750 | 0.4921 | 0.9151 | 0.8634 | 0.7598 | 0.8797 | 0.6886 | 0.7707 | 0.4283 | 0.8077 | 0.7103 | 0.7253 | | 0.1402 | 17.3 | 6400 | 0.5501 | 0.7149 | 0.8142 | 0.8850 | 0.9456 | 0.8124 | 0.8784 | 0.5115 | 0.9104 | 0.8760 | 0.7648 | 0.8792 | 0.6885 | 0.7677 | 0.4378 | 0.8073 | 0.6987 | 0.7254 | | 0.0813 | 17.35 | 6420 | 0.5124 | 0.7195 | 0.8216 | 0.8861 | 0.9435 | 0.8357 | 0.8774 | 0.5315 | 0.9048 | 0.8732 | 0.7853 | 0.8788 | 0.6874 | 0.7696 | 0.4431 | 0.8025 | 0.7107 | 0.7446 | | 0.1085 | 17.41 | 6440 | 0.4813 | 0.7305 | 0.8236 | 0.8939 | 0.9471 | 0.8276 | 0.8798 | 0.5244 | 0.9222 | 0.8503 | 0.8138 | 0.8796 | 0.6875 | 0.7734 | 0.4411 | 0.8217 | 0.7377 | 0.7727 | | 0.1439 | 17.46 | 6460 | 0.5327 | 0.7248 | 0.8146 | 0.8911 | 0.9400 | 0.8298 | 0.8716 | 0.4727 | 0.9351 | 0.8639 | 0.7888 | 0.8763 | 0.6880 | 0.7723 | 0.4196 | 0.8154 | 0.7514 | 0.7505 | | 0.0652 | 17.51 | 6480 | 0.4739 | 0.7331 | 0.8298 | 0.8935 | 0.9517 | 0.8073 | 0.8871 | 0.5477 | 0.8962 | 0.8921 | 0.8267 | 0.8786 | 0.6847 | 0.7770 | 0.4611 | 0.8168 | 0.7270 | 0.7864 | | 0.1091 | 17.57 | 6500 | 0.4763 | 0.7287 | 0.8306 | 0.8907 | 0.9489 | 0.8213 | 0.8759 | 0.5751 | 0.8972 | 0.8800 | 0.8160 | 0.8811 | 0.6868 | 0.7772 | 0.4570 | 0.8092 | 0.7115 | 0.7779 | | 0.4689 | 17.62 | 6520 | 0.4656 | 0.7248 | 0.8241 | 0.8897 | 0.9414 | 0.8117 | 0.8613 | 0.5546 | 0.9115 | 0.8665 | 0.8218 | 0.8764 | 0.6897 | 0.7702 | 0.4206 | 0.8104 | 0.7284 | 0.7778 | | 0.126 | 17.68 | 6540 | 0.4826 | 0.7219 | 0.8210 | 0.8898 | 0.9419 | 0.8082 | 0.8646 | 0.5306 | 0.9128 | 0.8680 | 0.8207 | 0.8767 | 0.6899 | 0.7575 | 0.4034 | 0.8148 | 0.7357 | 0.7754 | | 0.1497 | 17.73 | 6560 | 0.5100 | 0.7211 | 0.8117 | 0.8894 | 0.9488 | 0.8191 | 0.8843 | 0.4794 | 0.9243 | 0.8461 | 0.7800 | 0.8769 | 0.6926 | 0.7741 | 0.4011 | 0.8110 | 0.7498 | 0.7419 | | 0.1546 | 17.78 | 6580 | 0.4865 | 0.7209 | 0.8199 | 0.8887 | 0.9426 | 0.8249 | 0.8755 | 0.5045 | 0.9116 | 0.8858 | 0.7945 | 0.8804 | 0.6906 | 0.7710 | 0.4120 | 0.8074 | 0.7293 | 0.7560 | | 0.1639 | 17.84 | 6600 | 0.5077 | 0.7254 | 0.8190 | 0.8932 | 0.9488 | 0.8178 | 0.8878 | 0.4842 | 0.9149 | 0.8586 | 0.8208 | 0.8801 | 
0.6900 | 0.7609 | 0.4069 | 0.8209 | 0.7346 | 0.7841 | | 0.3673 | 17.89 | 6620 | 0.5128 | 0.7242 | 0.8186 | 0.8915 | 0.9463 | 0.8168 | 0.8871 | 0.5067 | 0.9217 | 0.8563 | 0.7954 | 0.8810 | 0.6900 | 0.7684 | 0.4188 | 0.8171 | 0.7352 | 0.7591 | | 0.1978 | 17.95 | 6640 | 0.5142 | 0.7217 | 0.8170 | 0.8893 | 0.9472 | 0.8285 | 0.8718 | 0.4931 | 0.9168 | 0.8795 | 0.7824 | 0.8788 | 0.6864 | 0.7676 | 0.4239 | 0.8117 | 0.7397 | 0.7434 | | 0.1925 | 18.0 | 6660 | 0.5152 | 0.7298 | 0.8267 | 0.8925 | 0.9490 | 0.8242 | 0.8814 | 0.5430 | 0.9099 | 0.8722 | 0.8072 | 0.8773 | 0.6874 | 0.7708 | 0.4438 | 0.8181 | 0.7407 | 0.7707 | | 0.1153 | 18.05 | 6680 | 0.4866 | 0.7342 | 0.8234 | 0.8971 | 0.9492 | 0.7931 | 0.8895 | 0.5085 | 0.9157 | 0.8595 | 0.8483 | 0.8780 | 0.6837 | 0.7701 | 0.4364 | 0.8295 | 0.7339 | 0.8075 | | 0.0899 | 18.11 | 6700 | 0.4887 | 0.7338 | 0.8318 | 0.8960 | 0.9459 | 0.8300 | 0.8650 | 0.5456 | 0.9127 | 0.8827 | 0.8406 | 0.8802 | 0.6827 | 0.7705 | 0.4455 | 0.8274 | 0.7293 | 0.8008 | | 0.2589 | 18.16 | 6720 | 0.4892 | 0.7323 | 0.8291 | 0.8945 | 0.9486 | 0.8144 | 0.8825 | 0.5375 | 0.9046 | 0.8833 | 0.8332 | 0.8810 | 0.6896 | 0.7695 | 0.4447 | 0.8210 | 0.7255 | 0.7944 | | 0.1197 | 18.22 | 6740 | 0.5024 | 0.7249 | 0.8207 | 0.8899 | 0.9513 | 0.8139 | 0.8668 | 0.5448 | 0.9104 | 0.8543 | 0.8037 | 0.8796 | 0.6902 | 0.7629 | 0.4414 | 0.8111 | 0.7264 | 0.7629 | | 0.1703 | 18.27 | 6760 | 0.5068 | 0.7253 | 0.8255 | 0.8890 | 0.9485 | 0.8050 | 0.8485 | 0.5965 | 0.9106 | 0.8742 | 0.7949 | 0.8806 | 0.6880 | 0.7666 | 0.4513 | 0.8089 | 0.7253 | 0.7563 | | 0.1416 | 18.32 | 6780 | 0.5116 | 0.7238 | 0.8234 | 0.8890 | 0.9468 | 0.8031 | 0.8735 | 0.5668 | 0.9109 | 0.8755 | 0.7871 | 0.8816 | 0.6887 | 0.7692 | 0.4442 | 0.8105 | 0.7240 | 0.7484 | | 0.2357 | 18.38 | 6800 | 0.4785 | 0.7263 | 0.8257 | 0.8907 | 0.9481 | 0.8063 | 0.8624 | 0.5775 | 0.9122 | 0.8718 | 0.8016 | 0.8816 | 0.6879 | 0.7647 | 0.4400 | 0.8146 | 0.7345 | 0.7604 | | 0.1692 | 18.43 | 6820 | 0.4507 | 0.7336 | 0.8321 | 0.8952 | 0.9472 | 0.8108 | 0.8720 | 0.5805 | 0.9127 | 0.8687 | 0.8325 | 0.8811 | 0.6865 | 0.7715 | 0.4468 | 0.8257 | 0.7338 | 0.7898 | | 0.1097 | 18.49 | 6840 | 0.4794 | 0.7333 | 0.8299 | 0.8953 | 0.9486 | 0.7926 | 0.8754 | 0.5718 | 0.9088 | 0.8714 | 0.8407 | 0.8762 | 0.6819 | 0.7697 | 0.4426 | 0.8279 | 0.7329 | 0.8021 | | 0.1896 | 18.54 | 6860 | 0.5163 | 0.7141 | 0.8230 | 0.8832 | 0.9360 | 0.7966 | 0.8850 | 0.5660 | 0.8949 | 0.8898 | 0.7928 | 0.8737 | 0.6814 | 0.7653 | 0.4298 | 0.8035 | 0.7004 | 0.7450 | | 0.1161 | 18.59 | 6880 | 0.5188 | 0.7164 | 0.8111 | 0.8873 | 0.9524 | 0.7999 | 0.8748 | 0.5150 | 0.9185 | 0.8481 | 0.7693 | 0.8763 | 0.6808 | 0.7721 | 0.4248 | 0.8156 | 0.7120 | 0.7332 | | 0.1457 | 18.65 | 6900 | 0.5094 | 0.7163 | 0.8214 | 0.8858 | 0.9379 | 0.8260 | 0.8810 | 0.5706 | 0.9231 | 0.8455 | 0.7658 | 0.8780 | 0.6832 | 0.7630 | 0.4432 | 0.8157 | 0.7021 | 0.7291 | | 0.074 | 18.7 | 6920 | 0.5089 | 0.7152 | 0.8230 | 0.8845 | 0.9461 | 0.8254 | 0.8721 | 0.5581 | 0.8992 | 0.8954 | 0.7646 | 0.8793 | 0.6885 | 0.7624 | 0.4443 | 0.8106 | 0.6965 | 0.7247 | | 0.0944 | 18.76 | 6940 | 0.5005 | 0.7185 | 0.8221 | 0.8853 | 0.9334 | 0.8362 | 0.8707 | 0.5501 | 0.9182 | 0.8642 | 0.7822 | 0.8718 | 0.6885 | 0.7623 | 0.4349 | 0.8074 | 0.7200 | 0.7444 | | 0.1743 | 18.81 | 6960 | 0.5052 | 0.7221 | 0.8193 | 0.8884 | 0.9463 | 0.8099 | 0.8844 | 0.5297 | 0.9108 | 0.8656 | 0.7886 | 0.8747 | 0.6858 | 0.7627 | 0.4372 | 0.8120 | 0.7296 | 0.7527 | | 0.0786 | 18.86 | 6980 | 0.5417 | 0.7102 | 0.8172 | 0.8817 | 0.9322 | 0.8292 | 0.8613 | 0.5575 | 0.9211 | 0.8579 | 
0.7613 | 0.8718 | 0.6816 | 0.7575 | 0.4297 | 0.8062 | 0.6984 | 0.7265 | | 0.1497 | 18.92 | 7000 | 0.4958 | 0.7216 | 0.8254 | 0.8864 | 0.9435 | 0.8290 | 0.8716 | 0.5538 | 0.8988 | 0.8889 | 0.7922 | 0.8717 | 0.6840 | 0.7687 | 0.4485 | 0.8067 | 0.7188 | 0.7530 | | 0.1378 | 18.97 | 7020 | 0.5228 | 0.7259 | 0.8238 | 0.8902 | 0.9509 | 0.8141 | 0.8692 | 0.5612 | 0.9107 | 0.8689 | 0.7917 | 0.8786 | 0.6847 | 0.7734 | 0.4483 | 0.8139 | 0.7239 | 0.7587 | | 0.1311 | 19.03 | 7040 | 0.4866 | 0.7274 | 0.8310 | 0.8900 | 0.9439 | 0.8333 | 0.8735 | 0.5742 | 0.9005 | 0.8708 | 0.8211 | 0.8732 | 0.6860 | 0.7738 | 0.4400 | 0.8116 | 0.7229 | 0.7844 | | 0.1454 | 19.08 | 7060 | 0.4845 | 0.7317 | 0.8286 | 0.8929 | 0.9456 | 0.8190 | 0.8716 | 0.5517 | 0.9063 | 0.8684 | 0.8378 | 0.8727 | 0.6878 | 0.7746 | 0.4425 | 0.8183 | 0.7280 | 0.7979 | | 0.1215 | 19.14 | 7080 | 0.5578 | 0.7173 | 0.8215 | 0.8844 | 0.9362 | 0.8271 | 0.8733 | 0.5648 | 0.9180 | 0.8708 | 0.7605 | 0.8748 | 0.6848 | 0.7757 | 0.4526 | 0.8074 | 0.7023 | 0.7238 | | 0.1733 | 19.19 | 7100 | 0.5758 | 0.7151 | 0.8126 | 0.8840 | 0.9434 | 0.8032 | 0.8686 | 0.5321 | 0.9184 | 0.8657 | 0.7569 | 0.8735 | 0.6836 | 0.7695 | 0.4483 | 0.8074 | 0.7009 | 0.7227 | | 0.0956 | 19.24 | 7120 | 0.5478 | 0.7163 | 0.8184 | 0.8849 | 0.9477 | 0.8340 | 0.8616 | 0.5424 | 0.9115 | 0.8760 | 0.7557 | 0.8757 | 0.6866 | 0.7741 | 0.4477 | 0.8107 | 0.6961 | 0.7229 | | 0.1144 | 19.3 | 7140 | 0.5789 | 0.7141 | 0.8140 | 0.8856 | 0.9517 | 0.8162 | 0.8838 | 0.5168 | 0.9127 | 0.8732 | 0.7437 | 0.8764 | 0.6866 | 0.7756 | 0.4273 | 0.8163 | 0.7046 | 0.7121 | | 0.0828 | 19.35 | 7160 | 0.5795 | 0.7170 | 0.8156 | 0.8876 | 0.9472 | 0.8291 | 0.8759 | 0.5119 | 0.9226 | 0.8665 | 0.7564 | 0.8813 | 0.6862 | 0.7769 | 0.4294 | 0.8160 | 0.7058 | 0.7231 | | 0.0687 | 19.41 | 7180 | 0.5686 | 0.7154 | 0.8117 | 0.8865 | 0.9477 | 0.7988 | 0.8613 | 0.5266 | 0.9248 | 0.8658 | 0.7568 | 0.8813 | 0.6824 | 0.7724 | 0.4327 | 0.8126 | 0.7030 | 0.7233 | | 0.1395 | 19.46 | 7200 | 0.5835 | 0.7151 | 0.8160 | 0.8849 | 0.9492 | 0.8137 | 0.8674 | 0.5327 | 0.9087 | 0.8845 | 0.7556 | 0.8781 | 0.6850 | 0.7740 | 0.4411 | 0.8095 | 0.6974 | 0.7206 | | 0.1072 | 19.51 | 7220 | 0.5384 | 0.7129 | 0.8160 | 0.8842 | 0.9497 | 0.8382 | 0.8495 | 0.5469 | 0.9164 | 0.8634 | 0.7483 | 0.8757 | 0.6841 | 0.7732 | 0.4277 | 0.8118 | 0.7020 | 0.7156 | | 0.0705 | 19.57 | 7240 | 0.5240 | 0.7166 | 0.8148 | 0.8862 | 0.9510 | 0.8087 | 0.8751 | 0.5294 | 0.9124 | 0.8659 | 0.7612 | 0.8788 | 0.6908 | 0.7709 | 0.4341 | 0.8119 | 0.7011 | 0.7285 | | 0.1924 | 19.62 | 7260 | 0.5222 | 0.7182 | 0.8179 | 0.8869 | 0.9494 | 0.8363 | 0.8751 | 0.5218 | 0.9142 | 0.8682 | 0.7605 | 0.8798 | 0.6900 | 0.7724 | 0.4420 | 0.8130 | 0.7035 | 0.7267 | | 0.0711 | 19.68 | 7280 | 0.5092 | 0.7208 | 0.8212 | 0.8879 | 0.9444 | 0.8343 | 0.8651 | 0.5372 | 0.9173 | 0.8744 | 0.7753 | 0.8823 | 0.6886 | 0.7731 | 0.4464 | 0.8114 | 0.7062 | 0.7378 | | 0.1801 | 19.73 | 7300 | 0.4822 | 0.7299 | 0.8289 | 0.8926 | 0.9455 | 0.8012 | 0.8879 | 0.5815 | 0.9108 | 0.8541 | 0.8213 | 0.8809 | 0.6865 | 0.7735 | 0.4426 | 0.8163 | 0.7270 | 0.7827 | | 0.1059 | 19.78 | 7320 | 0.4786 | 0.7355 | 0.8341 | 0.8964 | 0.9465 | 0.8363 | 0.8677 | 0.5692 | 0.9157 | 0.8635 | 0.8394 | 0.8792 | 0.6817 | 0.7738 | 0.4513 | 0.8289 | 0.7307 | 0.8030 | | 1.8411 | 19.84 | 7340 | 0.5175 | 0.7341 | 0.8313 | 0.8952 | 0.9529 | 0.8020 | 0.8914 | 0.5669 | 0.9008 | 0.8711 | 0.8339 | 0.8795 | 0.6810 | 0.7727 | 0.4574 | 0.8237 | 0.7244 | 0.7998 | | 1.7101 | 19.89 | 7360 | 0.5295 | 0.7327 | 0.8368 | 0.8942 | 0.9439 | 0.8316 | 0.8818 | 0.5814 | 
0.9034 | 0.8815 | 0.8340 | 0.8802 | 0.6786 | 0.7720 | 0.4525 | 0.8207 | 0.7249 | 0.8003 | | 0.089 | 19.95 | 7380 | 0.4847 | 0.7316 | 0.8318 | 0.8943 | 0.9423 | 0.7977 | 0.8840 | 0.5653 | 0.9044 | 0.8823 | 0.8468 | 0.8800 | 0.6732 | 0.7581 | 0.4550 | 0.8217 | 0.7251 | 0.8079 | | 0.2122 | 20.0 | 7400 | 0.4957 | 0.7262 | 0.8322 | 0.8912 | 0.9466 | 0.7874 | 0.8737 | 0.6017 | 0.8921 | 0.8826 | 0.8415 | 0.8802 | 0.6705 | 0.7617 | 0.4411 | 0.8144 | 0.7114 | 0.8046 | | 0.1163 | 20.05 | 7420 | 0.4751 | 0.7302 | 0.8361 | 0.8931 | 0.9460 | 0.8175 | 0.8749 | 0.5942 | 0.8950 | 0.8753 | 0.8495 | 0.8800 | 0.6827 | 0.7634 | 0.4388 | 0.8189 | 0.7180 | 0.8099 | | 0.1322 | 20.11 | 7440 | 0.4809 | 0.7343 | 0.8363 | 0.8949 | 0.9439 | 0.8231 | 0.8823 | 0.5715 | 0.9006 | 0.8841 | 0.8487 | 0.8793 | 0.6789 | 0.7647 | 0.4623 | 0.8228 | 0.7209 | 0.8108 | | 0.163 | 20.16 | 7460 | 0.4699 | 0.7318 | 0.8255 | 0.8947 | 0.9435 | 0.7919 | 0.8711 | 0.5605 | 0.9224 | 0.8536 | 0.8358 | 0.8808 | 0.6741 | 0.7648 | 0.4605 | 0.8241 | 0.7274 | 0.7912 | | 0.0845 | 20.22 | 7480 | 0.4678 | 0.7330 | 0.8274 | 0.8957 | 0.9472 | 0.8095 | 0.8836 | 0.5501 | 0.9185 | 0.8492 | 0.8339 | 0.8821 | 0.6801 | 0.7699 | 0.4498 | 0.8245 | 0.7279 | 0.7967 | | 0.066 | 20.27 | 7500 | 0.4627 | 0.7390 | 0.8320 | 0.8987 | 0.9465 | 0.7927 | 0.8875 | 0.5570 | 0.9104 | 0.8529 | 0.8768 | 0.8846 | 0.6828 | 0.7732 | 0.4529 | 0.8257 | 0.7332 | 0.8208 | | 0.2855 | 20.32 | 7520 | 0.4541 | 0.7441 | 0.8336 | 0.9017 | 0.9494 | 0.8038 | 0.8756 | 0.5441 | 0.9161 | 0.8675 | 0.8787 | 0.8845 | 0.6878 | 0.7759 | 0.4540 | 0.8331 | 0.7373 | 0.8360 | | 0.1623 | 20.38 | 7540 | 0.4726 | 0.7357 | 0.8248 | 0.8983 | 0.9451 | 0.8020 | 0.8860 | 0.5050 | 0.9240 | 0.8613 | 0.8501 | 0.8830 | 0.6824 | 0.7736 | 0.4379 | 0.8286 | 0.7315 | 0.8126 | | 0.0805 | 20.43 | 7560 | 0.4518 | 0.7355 | 0.8309 | 0.8982 | 0.9473 | 0.8429 | 0.8821 | 0.5069 | 0.9125 | 0.8675 | 0.8574 | 0.8834 | 0.6829 | 0.7742 | 0.4322 | 0.8270 | 0.7295 | 0.8195 | | 0.0892 | 20.49 | 7580 | 0.4647 | 0.7269 | 0.8204 | 0.8926 | 0.9485 | 0.7750 | 0.8801 | 0.5361 | 0.9138 | 0.8776 | 0.8120 | 0.8845 | 0.6764 | 0.7730 | 0.4364 | 0.8140 | 0.7282 | 0.7756 | | 0.0479 | 20.54 | 7600 | 0.4994 | 0.7207 | 0.8179 | 0.8888 | 0.9431 | 0.7955 | 0.8775 | 0.5400 | 0.9193 | 0.8557 | 0.7943 | 0.8803 | 0.6697 | 0.7697 | 0.4320 | 0.8091 | 0.7286 | 0.7554 | | 0.0717 | 20.59 | 7620 | 0.4951 | 0.7279 | 0.8261 | 0.8926 | 0.9488 | 0.8292 | 0.8698 | 0.5462 | 0.9146 | 0.8661 | 0.8081 | 0.8810 | 0.6859 | 0.7741 | 0.4324 | 0.8186 | 0.7327 | 0.7705 | | 0.4134 | 20.65 | 7640 | 0.5497 | 0.7189 | 0.8202 | 0.8875 | 0.9474 | 0.8428 | 0.8764 | 0.5258 | 0.9155 | 0.8723 | 0.7611 | 0.8812 | 0.6859 | 0.7786 | 0.4465 | 0.8147 | 0.6999 | 0.7256 | | 1.3472 | 20.7 | 7660 | 0.5462 | 0.7156 | 0.8162 | 0.8852 | 0.9531 | 0.7985 | 0.8820 | 0.5465 | 0.9061 | 0.8840 | 0.7435 | 0.8804 | 0.6862 | 0.7779 | 0.4522 | 0.8126 | 0.6903 | 0.7097 | | 0.1087 | 20.76 | 7680 | 0.5607 | 0.7188 | 0.8218 | 0.8866 | 0.9443 | 0.8418 | 0.8655 | 0.5463 | 0.9164 | 0.8772 | 0.7607 | 0.8812 | 0.6848 | 0.7749 | 0.4559 | 0.8126 | 0.6976 | 0.7243 | | 0.0687 | 20.81 | 7700 | 0.5686 | 0.7147 | 0.8184 | 0.8846 | 0.9454 | 0.8356 | 0.8812 | 0.5270 | 0.9111 | 0.8838 | 0.7445 | 0.8796 | 0.6874 | 0.7739 | 0.4499 | 0.8118 | 0.6902 | 0.7102 | | 0.1466 | 20.86 | 7720 | 0.5114 | 0.7294 | 0.8251 | 0.8930 | 0.9485 | 0.8343 | 0.8852 | 0.5219 | 0.9128 | 0.8603 | 0.8128 | 0.8810 | 0.6882 | 0.7740 | 0.4464 | 0.8180 | 0.7238 | 0.7742 | | 0.1206 | 20.92 | 7740 | 0.4843 | 0.7334 | 0.8314 | 0.8953 | 0.9473 | 0.8198 | 
0.8793 | 0.5627 | 0.9114 | 0.8636 | 0.8355 | 0.8820 | 0.6848 | 0.7750 | 0.4489 | 0.8235 | 0.7251 | 0.7947 | | 0.2114 | 20.97 | 7760 | 0.4590 | 0.7344 | 0.8253 | 0.8974 | 0.9516 | 0.8009 | 0.8882 | 0.5215 | 0.9138 | 0.8504 | 0.8506 | 0.8810 | 0.6843 | 0.7767 | 0.4363 | 0.8288 | 0.7240 | 0.8093 | | 0.1077 | 21.03 | 7780 | 0.4746 | 0.7342 | 0.8265 | 0.8976 | 0.9494 | 0.8226 | 0.8748 | 0.5081 | 0.9160 | 0.8608 | 0.8537 | 0.8809 | 0.6863 | 0.7763 | 0.4303 | 0.8306 | 0.7256 | 0.8096 | | 0.1077 | 21.08 | 7800 | 0.4745 | 0.7311 | 0.8242 | 0.8961 | 0.9476 | 0.8301 | 0.8801 | 0.4826 | 0.9151 | 0.8762 | 0.8378 | 0.8810 | 0.6883 | 0.7719 | 0.4254 | 0.8270 | 0.7257 | 0.7984 | | 0.1559 | 21.14 | 7820 | 0.5023 | 0.7318 | 0.8280 | 0.8947 | 0.9495 | 0.8289 | 0.8806 | 0.5277 | 0.9099 | 0.8731 | 0.8260 | 0.8812 | 0.6861 | 0.7727 | 0.4480 | 0.8230 | 0.7207 | 0.7907 | | 0.1022 | 21.19 | 7840 | 0.4712 | 0.7370 | 0.8337 | 0.8973 | 0.9458 | 0.8162 | 0.8607 | 0.5465 | 0.9025 | 0.8809 | 0.8833 | 0.8847 | 0.6849 | 0.7667 | 0.4551 | 0.8237 | 0.7311 | 0.8129 | | 0.1382 | 21.24 | 7860 | 0.5166 | 0.7272 | 0.8274 | 0.8935 | 0.9405 | 0.7601 | 0.8616 | 0.5688 | 0.9002 | 0.8737 | 0.8868 | 0.8843 | 0.6504 | 0.7605 | 0.4541 | 0.8203 | 0.7298 | 0.7908 | | 0.0495 | 21.3 | 7880 | 0.4990 | 0.7369 | 0.8308 | 0.8973 | 0.9487 | 0.8126 | 0.8728 | 0.5573 | 0.9147 | 0.8584 | 0.8513 | 0.8814 | 0.6874 | 0.7736 | 0.4531 | 0.8279 | 0.7242 | 0.8110 | | 0.1309 | 21.35 | 7900 | 0.4653 | 0.7381 | 0.8351 | 0.8976 | 0.9481 | 0.8190 | 0.8834 | 0.5699 | 0.9090 | 0.8629 | 0.8532 | 0.8817 | 0.6906 | 0.7731 | 0.4505 | 0.8276 | 0.7302 | 0.8131 | | 0.2034 | 21.41 | 7920 | 0.4758 | 0.7366 | 0.8351 | 0.8974 | 0.9500 | 0.8361 | 0.8535 | 0.5739 | 0.9116 | 0.8660 | 0.8544 | 0.8817 | 0.6902 | 0.7708 | 0.4376 | 0.8287 | 0.7333 | 0.8137 | | 0.1587 | 21.46 | 7940 | 0.5065 | 0.7307 | 0.8212 | 0.8946 | 0.9437 | 0.8057 | 0.8684 | 0.4995 | 0.9178 | 0.8615 | 0.8518 | 0.8720 | 0.6906 | 0.7699 | 0.4165 | 0.8246 | 0.7335 | 0.8082 | | 0.0888 | 21.51 | 7960 | 0.4859 | 0.7338 | 0.8299 | 0.8955 | 0.9436 | 0.8250 | 0.8752 | 0.5298 | 0.9110 | 0.8749 | 0.8498 | 0.8767 | 0.6903 | 0.7732 | 0.4290 | 0.8233 | 0.7330 | 0.8110 | | 0.0978 | 21.57 | 7980 | 0.4848 | 0.7359 | 0.8304 | 0.8978 | 0.9515 | 0.8059 | 0.8657 | 0.5750 | 0.9189 | 0.8503 | 0.8455 | 0.8824 | 0.6873 | 0.7733 | 0.4339 | 0.8311 | 0.7345 | 0.8089 | | 0.0727 | 21.62 | 8000 | 0.4940 | 0.7323 | 0.8296 | 0.8940 | 0.9436 | 0.8231 | 0.8738 | 0.5583 | 0.9166 | 0.8654 | 0.8261 | 0.8786 | 0.6888 | 0.7712 | 0.4435 | 0.8212 | 0.7337 | 0.7888 | | 0.1287 | 21.68 | 8020 | 0.4959 | 0.7303 | 0.8277 | 0.8932 | 0.9487 | 0.8190 | 0.8762 | 0.5403 | 0.9082 | 0.8895 | 0.8121 | 0.8815 | 0.6877 | 0.7773 | 0.4434 | 0.8181 | 0.7307 | 0.7734 | | 0.1429 | 21.73 | 8040 | 0.5274 | 0.7235 | 0.8189 | 0.8896 | 0.9513 | 0.8038 | 0.8817 | 0.5183 | 0.9073 | 0.8827 | 0.7872 | 0.8769 | 0.6856 | 0.7755 | 0.4339 | 0.8130 | 0.7287 | 0.7506 | | 0.1819 | 21.78 | 8060 | 0.5184 | 0.7263 | 0.8218 | 0.8910 | 0.9511 | 0.8180 | 0.8602 | 0.5402 | 0.9152 | 0.8762 | 0.7918 | 0.8794 | 0.6872 | 0.7768 | 0.4386 | 0.8150 | 0.7319 | 0.7554 | | 0.0709 | 21.84 | 8080 | 0.5355 | 0.7296 | 0.8309 | 0.8923 | 0.9489 | 0.8168 | 0.8815 | 0.5756 | 0.9042 | 0.8791 | 0.8099 | 0.8817 | 0.6847 | 0.7772 | 0.4445 | 0.8156 | 0.7308 | 0.7727 | | 0.0816 | 21.89 | 8100 | 0.5323 | 0.7296 | 0.8238 | 0.8923 | 0.9496 | 0.8143 | 0.8752 | 0.5390 | 0.9141 | 0.8707 | 0.8034 | 0.8808 | 0.6859 | 0.7751 | 0.4512 | 0.8144 | 0.7340 | 0.7659 | | 0.1847 | 21.95 | 8120 | 0.5215 | 0.7275 | 0.8262 | 0.8911 | 
0.9489 | 0.8174 | 0.8788 | 0.5486 | 0.9071 | 0.8862 | 0.7963 | 0.8817 | 0.6863 | 0.7767 | 0.4441 | 0.8117 | 0.7327 | 0.7591 | | 0.1339 | 22.0 | 8140 | 0.5006 | 0.7283 | 0.8253 | 0.8921 | 0.9512 | 0.8137 | 0.8821 | 0.5339 | 0.9030 | 0.8784 | 0.8145 | 0.8812 | 0.6868 | 0.7751 | 0.4375 | 0.8135 | 0.7309 | 0.7728 | | 0.0847 | 22.05 | 8160 | 0.4968 | 0.7298 | 0.8271 | 0.8924 | 0.9460 | 0.8266 | 0.8722 | 0.5422 | 0.9123 | 0.8792 | 0.8115 | 0.8821 | 0.6888 | 0.7764 | 0.4448 | 0.8138 | 0.7306 | 0.7720 | | 0.0571 | 22.11 | 8180 | 0.4889 | 0.7311 | 0.8259 | 0.8939 | 0.9491 | 0.8004 | 0.8833 | 0.5535 | 0.9120 | 0.8564 | 0.8265 | 0.8802 | 0.6874 | 0.7744 | 0.4399 | 0.8190 | 0.7283 | 0.7888 | | 0.1909 | 22.16 | 8200 | 0.4716 | 0.7357 | 0.8323 | 0.8961 | 0.9433 | 0.8261 | 0.8848 | 0.5429 | 0.9104 | 0.8654 | 0.8531 | 0.8780 | 0.6900 | 0.7768 | 0.4410 | 0.8239 | 0.7298 | 0.8108 | | 0.1584 | 22.22 | 8220 | 0.4890 | 0.7345 | 0.8322 | 0.8965 | 0.9470 | 0.8401 | 0.8787 | 0.5238 | 0.9065 | 0.8783 | 0.8510 | 0.8799 | 0.6863 | 0.7761 | 0.4362 | 0.8256 | 0.7287 | 0.8089 | | 0.0863 | 22.27 | 8240 | 0.4794 | 0.7362 | 0.8298 | 0.8971 | 0.9443 | 0.8112 | 0.8771 | 0.5387 | 0.9171 | 0.8710 | 0.8490 | 0.8813 | 0.6875 | 0.7766 | 0.4446 | 0.8262 | 0.7294 | 0.8079 | | 0.1098 | 22.32 | 8260 | 0.4889 | 0.7358 | 0.8326 | 0.8969 | 0.9479 | 0.8067 | 0.8873 | 0.5532 | 0.9065 | 0.8787 | 0.8480 | 0.8827 | 0.6879 | 0.7747 | 0.4424 | 0.8263 | 0.7292 | 0.8073 | | 0.6616 | 22.38 | 8280 | 0.4827 | 0.7333 | 0.8269 | 0.8959 | 0.9451 | 0.8194 | 0.8780 | 0.5096 | 0.9140 | 0.8795 | 0.8425 | 0.8791 | 0.6864 | 0.7750 | 0.4354 | 0.8244 | 0.7290 | 0.8041 | | 0.1257 | 22.43 | 8300 | 0.4592 | 0.7347 | 0.8258 | 0.8973 | 0.9443 | 0.8158 | 0.8731 | 0.5141 | 0.9235 | 0.8590 | 0.8504 | 0.8826 | 0.6868 | 0.7757 | 0.4375 | 0.8268 | 0.7287 | 0.8049 | | 0.1024 | 22.49 | 8320 | 0.4701 | 0.7340 | 0.8343 | 0.8961 | 0.9460 | 0.8369 | 0.8847 | 0.5437 | 0.9043 | 0.8733 | 0.8511 | 0.8798 | 0.6822 | 0.7749 | 0.4400 | 0.8263 | 0.7303 | 0.8047 | | 0.1011 | 22.54 | 8340 | 0.4936 | 0.7335 | 0.8309 | 0.8959 | 0.9481 | 0.8337 | 0.8731 | 0.5376 | 0.9093 | 0.8683 | 0.8464 | 0.8797 | 0.6857 | 0.7738 | 0.4346 | 0.8251 | 0.7298 | 0.8059 | | 0.0516 | 22.59 | 8360 | 0.4869 | 0.7326 | 0.8288 | 0.8961 | 0.9448 | 0.8347 | 0.8800 | 0.5264 | 0.9161 | 0.8491 | 0.8507 | 0.8789 | 0.6832 | 0.7724 | 0.4310 | 0.8275 | 0.7308 | 0.8042 | | 0.1104 | 22.65 | 8380 | 0.4936 | 0.7317 | 0.8298 | 0.8952 | 0.9483 | 0.8160 | 0.8723 | 0.5306 | 0.9029 | 0.8937 | 0.8448 | 0.8789 | 0.6822 | 0.7650 | 0.4344 | 0.8252 | 0.7303 | 0.8058 | | 0.1515 | 22.7 | 8400 | 0.5197 | 0.7314 | 0.8288 | 0.8948 | 0.9487 | 0.8178 | 0.8653 | 0.5395 | 0.9061 | 0.8781 | 0.8458 | 0.8766 | 0.6829 | 0.7636 | 0.4360 | 0.8260 | 0.7312 | 0.8038 | | 0.0403 | 22.76 | 8420 | 0.4937 | 0.7347 | 0.8305 | 0.8969 | 0.9452 | 0.8229 | 0.8638 | 0.5490 | 0.9198 | 0.8681 | 0.8451 | 0.8807 | 0.6794 | 0.7733 | 0.4430 | 0.8290 | 0.7304 | 0.8070 | | 1.5074 | 22.81 | 8440 | 0.5201 | 0.7332 | 0.8294 | 0.8962 | 0.9507 | 0.8137 | 0.8889 | 0.5541 | 0.9109 | 0.8433 | 0.8445 | 0.8806 | 0.6811 | 0.7740 | 0.4368 | 0.8266 | 0.7263 | 0.8072 | | 0.0575 | 22.86 | 8460 | 0.4946 | 0.7356 | 0.8293 | 0.8978 | 0.9470 | 0.8177 | 0.8777 | 0.5362 | 0.9193 | 0.8592 | 0.8478 | 0.8826 | 0.6831 | 0.7778 | 0.4380 | 0.8289 | 0.7305 | 0.8080 | | 0.1483 | 22.92 | 8480 | 0.4933 | 0.7365 | 0.8306 | 0.8981 | 0.9493 | 0.8194 | 0.8779 | 0.5361 | 0.9146 | 0.8672 | 0.8493 | 0.8827 | 0.6838 | 0.7815 | 0.4397 | 0.8293 | 0.7306 | 0.8081 | | 0.2198 | 22.97 | 8500 | 0.4894 | 0.7346 | 
0.8313 | 0.8973 | 0.9471 | 0.8354 | 0.8829 | 0.5255 | 0.9126 | 0.8680 | 0.8479 | 0.8822 | 0.6844 | 0.7811 | 0.4307 | 0.8278 | 0.7287 | 0.8074 | | 1.6629 | 23.03 | 8520 | 0.5074 | 0.7312 | 0.8291 | 0.8957 | 0.9494 | 0.8407 | 0.8781 | 0.5232 | 0.9102 | 0.8602 | 0.8420 | 0.8799 | 0.6816 | 0.7784 | 0.4224 | 0.8257 | 0.7245 | 0.8059 | | 0.3469 | 23.08 | 8540 | 0.5050 | 0.7338 | 0.8314 | 0.8964 | 0.9476 | 0.8351 | 0.8707 | 0.5335 | 0.9105 | 0.8780 | 0.8448 | 0.8807 | 0.6820 | 0.7786 | 0.4365 | 0.8265 | 0.7256 | 0.8069 | | 0.1627 | 23.14 | 8560 | 0.5105 | 0.7361 | 0.8327 | 0.8973 | 0.9436 | 0.8144 | 0.8788 | 0.5583 | 0.9182 | 0.8719 | 0.8435 | 0.8836 | 0.6845 | 0.7798 | 0.4444 | 0.8275 | 0.7269 | 0.8060 | | 0.1096 | 23.19 | 8580 | 0.4955 | 0.7355 | 0.8314 | 0.8973 | 0.9506 | 0.8110 | 0.8726 | 0.5601 | 0.9119 | 0.8669 | 0.8469 | 0.8834 | 0.6862 | 0.7785 | 0.4377 | 0.8279 | 0.7282 | 0.8066 | | 0.1026 | 23.24 | 8600 | 0.4955 | 0.7323 | 0.8300 | 0.8956 | 0.9497 | 0.8230 | 0.8783 | 0.5401 | 0.9076 | 0.8714 | 0.8396 | 0.8816 | 0.6843 | 0.7779 | 0.4314 | 0.8238 | 0.7269 | 0.8006 | | 0.1823 | 23.3 | 8620 | 0.5105 | 0.7319 | 0.8229 | 0.8964 | 0.9527 | 0.7888 | 0.8684 | 0.5296 | 0.9149 | 0.8600 | 0.8461 | 0.8811 | 0.6804 | 0.7747 | 0.4278 | 0.8266 | 0.7248 | 0.8077 | | 0.0839 | 23.35 | 8640 | 0.4857 | 0.7369 | 0.8322 | 0.8980 | 0.9513 | 0.8202 | 0.8842 | 0.5373 | 0.9076 | 0.8783 | 0.8469 | 0.8820 | 0.6850 | 0.7779 | 0.4430 | 0.8295 | 0.7327 | 0.8081 | | 1.5986 | 23.41 | 8660 | 0.4855 | 0.7366 | 0.8314 | 0.8975 | 0.9477 | 0.8023 | 0.8785 | 0.5457 | 0.9107 | 0.8914 | 0.8435 | 0.8843 | 0.6836 | 0.7773 | 0.4484 | 0.8254 | 0.7304 | 0.8067 | | 0.3694 | 23.46 | 8680 | 0.4914 | 0.7356 | 0.8303 | 0.8969 | 0.9478 | 0.8157 | 0.8679 | 0.5481 | 0.9151 | 0.8726 | 0.8448 | 0.8812 | 0.6850 | 0.7763 | 0.4425 | 0.8267 | 0.7301 | 0.8072 | | 0.1776 | 23.51 | 8700 | 0.5106 | 0.7317 | 0.8257 | 0.8954 | 0.9507 | 0.8110 | 0.8780 | 0.5400 | 0.9166 | 0.8563 | 0.8271 | 0.8811 | 0.6848 | 0.7756 | 0.4359 | 0.8254 | 0.7271 | 0.7919 | | 0.1083 | 23.57 | 8720 | 0.5244 | 0.7330 | 0.8318 | 0.8963 | 0.9504 | 0.8067 | 0.8760 | 0.5720 | 0.9094 | 0.8689 | 0.8395 | 0.8833 | 0.6826 | 0.7735 | 0.4324 | 0.8277 | 0.7296 | 0.8021 | | 0.1172 | 23.62 | 8740 | 0.5029 | 0.7349 | 0.8349 | 0.8971 | 0.9455 | 0.8120 | 0.8856 | 0.5657 | 0.9091 | 0.8821 | 0.8445 | 0.8829 | 0.6789 | 0.7751 | 0.4414 | 0.8296 | 0.7314 | 0.8050 | | 0.2092 | 23.68 | 8760 | 0.5062 | 0.7345 | 0.8291 | 0.8976 | 0.9482 | 0.8254 | 0.8779 | 0.5158 | 0.9140 | 0.8749 | 0.8475 | 0.8816 | 0.6852 | 0.7748 | 0.4316 | 0.8303 | 0.7315 | 0.8066 | | 0.1303 | 23.73 | 8780 | 0.4917 | 0.7335 | 0.8269 | 0.8972 | 0.9493 | 0.8173 | 0.8772 | 0.5136 | 0.9143 | 0.8688 | 0.8482 | 0.8813 | 0.6854 | 0.7736 | 0.4270 | 0.8292 | 0.7312 | 0.8071 | | 0.0973 | 23.78 | 8800 | 0.4735 | 0.7328 | 0.8246 | 0.8971 | 0.9486 | 0.7994 | 0.8713 | 0.5178 | 0.9174 | 0.8675 | 0.8503 | 0.8823 | 0.6831 | 0.7696 | 0.4276 | 0.8286 | 0.7307 | 0.8078 | | 0.0345 | 23.84 | 8820 | 0.4864 | 0.7340 | 0.8307 | 0.8968 | 0.9461 | 0.8355 | 0.8764 | 0.5321 | 0.9152 | 0.8630 | 0.8469 | 0.8813 | 0.6850 | 0.7722 | 0.4350 | 0.8281 | 0.7284 | 0.8077 | | 0.0633 | 23.89 | 8840 | 0.4794 | 0.7348 | 0.8293 | 0.8973 | 0.9488 | 0.8196 | 0.8739 | 0.5386 | 0.9163 | 0.8619 | 0.8460 | 0.8820 | 0.6875 | 0.7748 | 0.4346 | 0.8291 | 0.7282 | 0.8073 | | 0.1221 | 23.95 | 8860 | 0.4854 | 0.7341 | 0.8276 | 0.8977 | 0.9455 | 0.8285 | 0.8740 | 0.4943 | 0.9175 | 0.8860 | 0.8473 | 0.8828 | 0.6871 | 0.7735 | 0.4274 | 0.8292 | 0.7313 | 0.8076 | | 0.1367 | 24.0 | 8880 | 
0.4951 | 0.7319 | 0.8221 | 0.8973 | 0.9494 | 0.8125 | 0.8741 | 0.4871 | 0.9196 | 0.8641 | 0.8477 | 0.8805 | 0.6860 | 0.7715 | 0.4154 | 0.8301 | 0.7340 | 0.8062 | | 0.7502 | 24.05 | 8900 | 0.4909 | 0.7298 | 0.8320 | 0.8932 | 0.9460 | 0.8350 | 0.8674 | 0.5635 | 0.9059 | 0.8742 | 0.8320 | 0.8790 | 0.6863 | 0.7723 | 0.4320 | 0.8199 | 0.7291 | 0.7903 | | 0.1402 | 24.11 | 8920 | 0.5295 | 0.7170 | 0.8211 | 0.8855 | 0.9478 | 0.8153 | 0.8786 | 0.5625 | 0.9056 | 0.8744 | 0.7637 | 0.8800 | 0.6860 | 0.7711 | 0.4365 | 0.8068 | 0.7109 | 0.7279 | | 0.038 | 24.16 | 8940 | 0.5261 | 0.7264 | 0.8230 | 0.8903 | 0.9474 | 0.8263 | 0.8773 | 0.5298 | 0.9129 | 0.8793 | 0.7880 | 0.8805 | 0.6871 | 0.7726 | 0.4501 | 0.8106 | 0.7320 | 0.7522 | | 0.0857 | 24.22 | 8960 | 0.4790 | 0.7347 | 0.8318 | 0.8958 | 0.9465 | 0.8269 | 0.8683 | 0.5592 | 0.9147 | 0.8729 | 0.8343 | 0.8811 | 0.6869 | 0.7735 | 0.4499 | 0.8252 | 0.7321 | 0.7940 | | 0.1291 | 24.27 | 8980 | 0.4926 | 0.7368 | 0.8317 | 0.8976 | 0.9499 | 0.8217 | 0.8780 | 0.5478 | 0.9128 | 0.8668 | 0.8449 | 0.8820 | 0.6877 | 0.7725 | 0.4458 | 0.8285 | 0.7337 | 0.8073 | | 0.5901 | 24.32 | 9000 | 0.4822 | 0.7359 | 0.8317 | 0.8971 | 0.9479 | 0.8328 | 0.8767 | 0.5522 | 0.9177 | 0.8523 | 0.8421 | 0.8824 | 0.6872 | 0.7758 | 0.4428 | 0.8272 | 0.7337 | 0.8025 | | 0.0284 | 24.38 | 9020 | 0.4928 | 0.7332 | 0.8271 | 0.8960 | 0.9467 | 0.8120 | 0.8768 | 0.5385 | 0.9202 | 0.8661 | 0.8294 | 0.8826 | 0.6865 | 0.7743 | 0.4377 | 0.8248 | 0.7344 | 0.7923 | | 0.122 | 24.43 | 9040 | 0.5095 | 0.7350 | 0.8300 | 0.8972 | 0.9482 | 0.8393 | 0.8769 | 0.5131 | 0.9118 | 0.8714 | 0.8491 | 0.8791 | 0.6875 | 0.7734 | 0.4324 | 0.8282 | 0.7349 | 0.8097 | | 0.1368 | 24.49 | 9060 | 0.4887 | 0.7373 | 0.8310 | 0.8979 | 0.9509 | 0.8169 | 0.8671 | 0.5441 | 0.9120 | 0.8758 | 0.8502 | 0.8806 | 0.6863 | 0.7738 | 0.4473 | 0.8297 | 0.7335 | 0.8101 | | 0.1202 | 24.54 | 9080 | 0.4913 | 0.7367 | 0.8323 | 0.8976 | 0.9497 | 0.8238 | 0.8740 | 0.5362 | 0.9076 | 0.8838 | 0.8511 | 0.8816 | 0.6876 | 0.7730 | 0.4440 | 0.8280 | 0.7331 | 0.8097 | | 0.0629 | 24.59 | 9100 | 0.4795 | 0.7346 | 0.8270 | 0.8968 | 0.9497 | 0.8101 | 0.8770 | 0.5290 | 0.9164 | 0.8715 | 0.8351 | 0.8835 | 0.6851 | 0.7746 | 0.4420 | 0.8248 | 0.7346 | 0.7978 | | 0.1315 | 24.65 | 9120 | 0.4952 | 0.7325 | 0.8224 | 0.8966 | 0.9506 | 0.8099 | 0.8813 | 0.4923 | 0.9164 | 0.8667 | 0.8395 | 0.8824 | 0.6846 | 0.7745 | 0.4277 | 0.8242 | 0.7342 | 0.7997 | | 0.2129 | 24.7 | 9140 | 0.4968 | 0.7344 | 0.8271 | 0.8974 | 0.9510 | 0.8294 | 0.8779 | 0.5091 | 0.9141 | 0.8605 | 0.8475 | 0.8810 | 0.6840 | 0.7730 | 0.4351 | 0.8280 | 0.7314 | 0.8082 | | 0.168 | 24.76 | 9160 | 0.5137 | 0.7335 | 0.8262 | 0.8968 | 0.9461 | 0.8301 | 0.8856 | 0.5063 | 0.9198 | 0.8505 | 0.8447 | 0.8820 | 0.6847 | 0.7697 | 0.4364 | 0.8256 | 0.7288 | 0.8075 | | 0.0502 | 24.81 | 9180 | 0.5090 | 0.7350 | 0.8312 | 0.8970 | 0.9469 | 0.8356 | 0.8721 | 0.5278 | 0.9132 | 0.8773 | 0.8452 | 0.8811 | 0.6841 | 0.7701 | 0.4437 | 0.8279 | 0.7315 | 0.8068 | | 0.1378 | 24.86 | 9200 | 0.5121 | 0.7337 | 0.8257 | 0.8957 | 0.9476 | 0.8067 | 0.8751 | 0.5338 | 0.9187 | 0.8660 | 0.8318 | 0.8803 | 0.6835 | 0.7748 | 0.4488 | 0.8244 | 0.7315 | 0.7927 | | 0.0819 | 24.92 | 9220 | 0.5272 | 0.7319 | 0.8235 | 0.8944 | 0.9514 | 0.7968 | 0.8776 | 0.5372 | 0.9139 | 0.8639 | 0.8234 | 0.8793 | 0.6825 | 0.7770 | 0.4485 | 0.8209 | 0.7290 | 0.7861 | | 0.1277 | 24.97 | 9240 | 0.5132 | 0.7352 | 0.8280 | 0.8964 | 0.9473 | 0.8034 | 0.8822 | 0.5324 | 0.9123 | 0.8748 | 0.8435 | 0.8793 | 0.6836 | 0.7767 | 0.4514 | 0.8267 | 0.7296 | 0.7991 | | 0.087 | 
25.03 | 9260 | 0.5033 | 0.7351 | 0.8330 | 0.8962 | 0.9439 | 0.8327 | 0.8671 | 0.5619 | 0.9165 | 0.8616 | 0.8473 | 0.8787 | 0.6847 | 0.7770 | 0.4503 | 0.8294 | 0.7284 | 0.7975 | | 0.0817 | 25.08 | 9280 | 0.5268 | 0.7354 | 0.8276 | 0.8965 | 0.9438 | 0.8128 | 0.8767 | 0.5291 | 0.9204 | 0.8709 | 0.8398 | 0.8803 | 0.6882 | 0.7790 | 0.4510 | 0.8271 | 0.7277 | 0.7946 | | 0.0905 | 25.14 | 9300 | 0.5198 | 0.7356 | 0.8287 | 0.8963 | 0.9433 | 0.8233 | 0.8811 | 0.5311 | 0.9191 | 0.8581 | 0.8446 | 0.8787 | 0.6891 | 0.7795 | 0.4521 | 0.8269 | 0.7267 | 0.7961 | | 0.1844 | 25.19 | 9320 | 0.5282 | 0.7317 | 0.8267 | 0.8937 | 0.9456 | 0.8367 | 0.8760 | 0.5267 | 0.9169 | 0.8679 | 0.8172 | 0.8791 | 0.6878 | 0.7793 | 0.4516 | 0.8199 | 0.7301 | 0.7742 | | 0.159 | 25.24 | 9340 | 0.5340 | 0.7282 | 0.8215 | 0.8916 | 0.9480 | 0.8135 | 0.8779 | 0.5142 | 0.9118 | 0.8787 | 0.8061 | 0.8781 | 0.6852 | 0.7796 | 0.4499 | 0.8154 | 0.7303 | 0.7589 | | 0.1232 | 25.3 | 9360 | 0.5288 | 0.7300 | 0.8225 | 0.8928 | 0.9471 | 0.8042 | 0.8742 | 0.5269 | 0.9153 | 0.8757 | 0.8143 | 0.8789 | 0.6842 | 0.7784 | 0.4525 | 0.8183 | 0.7307 | 0.7671 | | 0.134 | 25.35 | 9380 | 0.5272 | 0.7337 | 0.8308 | 0.8954 | 0.9429 | 0.7984 | 0.8807 | 0.5682 | 0.9131 | 0.8641 | 0.8481 | 0.8780 | 0.6811 | 0.7769 | 0.4453 | 0.8269 | 0.7283 | 0.7992 | | 0.1115 | 25.41 | 9400 | 0.5284 | 0.7337 | 0.8310 | 0.8958 | 0.9434 | 0.8257 | 0.8750 | 0.5312 | 0.9109 | 0.8840 | 0.8469 | 0.8798 | 0.6814 | 0.7714 | 0.4502 | 0.8275 | 0.7278 | 0.7974 | | 0.1082 | 25.46 | 9420 | 0.5300 | 0.7355 | 0.8303 | 0.8968 | 0.9486 | 0.8197 | 0.8826 | 0.5354 | 0.9105 | 0.8704 | 0.8451 | 0.8800 | 0.6830 | 0.7702 | 0.4484 | 0.8271 | 0.7333 | 0.8064 | | 0.1735 | 25.51 | 9440 | 0.5159 | 0.7269 | 0.8239 | 0.8919 | 0.9422 | 0.7854 | 0.8821 | 0.5519 | 0.9150 | 0.8762 | 0.8148 | 0.8806 | 0.6676 | 0.7643 | 0.4498 | 0.8155 | 0.7345 | 0.7758 | | 0.1475 | 25.57 | 9460 | 0.5141 | 0.7288 | 0.8240 | 0.8940 | 0.9425 | 0.7976 | 0.8767 | 0.5369 | 0.9214 | 0.8673 | 0.8258 | 0.8811 | 0.6659 | 0.7633 | 0.4448 | 0.8214 | 0.7357 | 0.7892 | | 0.2059 | 25.62 | 9480 | 0.4976 | 0.7280 | 0.8187 | 0.8951 | 0.9471 | 0.7547 | 0.8860 | 0.5232 | 0.9192 | 0.8627 | 0.8376 | 0.8813 | 0.6513 | 0.7632 | 0.4429 | 0.8244 | 0.7344 | 0.7987 | | 0.0858 | 25.68 | 9500 | 0.5191 | 0.7331 | 0.8287 | 0.8965 | 0.9468 | 0.8149 | 0.8746 | 0.5336 | 0.9144 | 0.8707 | 0.8457 | 0.8809 | 0.6759 | 0.7698 | 0.4395 | 0.8272 | 0.7309 | 0.8076 | | 0.1239 | 25.73 | 9520 | 0.5073 | 0.7358 | 0.8287 | 0.8978 | 0.9476 | 0.8182 | 0.8683 | 0.5369 | 0.9213 | 0.8617 | 0.8467 | 0.8812 | 0.6814 | 0.7744 | 0.4447 | 0.8308 | 0.7315 | 0.8068 | | 0.0731 | 25.78 | 9540 | 0.5301 | 0.7362 | 0.8323 | 0.8972 | 0.9520 | 0.8311 | 0.8612 | 0.5475 | 0.9093 | 0.8804 | 0.8445 | 0.8796 | 0.6829 | 0.7744 | 0.4499 | 0.8296 | 0.7303 | 0.8064 | | 0.1221 | 25.84 | 9560 | 0.5168 | 0.7358 | 0.8271 | 0.8976 | 0.9454 | 0.8153 | 0.8718 | 0.5303 | 0.9263 | 0.8611 | 0.8397 | 0.8825 | 0.6843 | 0.7747 | 0.4438 | 0.8287 | 0.7332 | 0.8031 | | 0.2079 | 25.89 | 9580 | 0.5148 | 0.7365 | 0.8298 | 0.8978 | 0.9511 | 0.8079 | 0.8728 | 0.5490 | 0.9148 | 0.8696 | 0.8437 | 0.8813 | 0.6823 | 0.7759 | 0.4464 | 0.8303 | 0.7331 | 0.8059 | | 0.1294 | 25.95 | 9600 | 0.5026 | 0.7351 | 0.8264 | 0.8978 | 0.9497 | 0.8114 | 0.8611 | 0.5264 | 0.9206 | 0.8689 | 0.8466 | 0.8814 | 0.6857 | 0.7719 | 0.4366 | 0.8311 | 0.7312 | 0.8075 | | 0.1027 | 26.0 | 9620 | 0.5231 | 0.7341 | 0.8257 | 0.8976 | 0.9492 | 0.7996 | 0.8804 | 0.5122 | 0.9146 | 0.8743 | 0.8493 | 0.8812 | 0.6800 | 0.7725 | 0.4334 | 0.8300 | 0.7339 | 0.8074 
| | 0.0546 | 26.05 | 9640 | 0.5029 | 0.7376 | 0.8316 | 0.8984 | 0.9471 | 0.8256 | 0.8740 | 0.5308 | 0.9159 | 0.8789 | 0.8492 | 0.8811 | 0.6865 | 0.7731 | 0.4460 | 0.8318 | 0.7365 | 0.8080 | | 0.0587 | 26.11 | 9660 | 0.5070 | 0.7373 | 0.8302 | 0.8988 | 0.9468 | 0.8372 | 0.8647 | 0.5219 | 0.9237 | 0.8715 | 0.8453 | 0.8818 | 0.6877 | 0.7727 | 0.4414 | 0.8334 | 0.7372 | 0.8072 | | 0.1074 | 26.16 | 9680 | 0.5161 | 0.7367 | 0.8284 | 0.8985 | 0.9501 | 0.8170 | 0.8809 | 0.5162 | 0.9160 | 0.8729 | 0.8460 | 0.8818 | 0.6870 | 0.7750 | 0.4401 | 0.8320 | 0.7341 | 0.8071 | | 0.1067 | 26.22 | 9700 | 0.5173 | 0.7376 | 0.8322 | 0.8984 | 0.9481 | 0.8180 | 0.8779 | 0.5397 | 0.9136 | 0.8810 | 0.8473 | 0.8821 | 0.6848 | 0.7745 | 0.4487 | 0.8319 | 0.7339 | 0.8071 | | 0.1986 | 26.27 | 9720 | 0.5081 | 0.7350 | 0.8281 | 0.8976 | 0.9482 | 0.8121 | 0.8605 | 0.5314 | 0.9178 | 0.8766 | 0.8499 | 0.8815 | 0.6832 | 0.7678 | 0.4463 | 0.8327 | 0.7329 | 0.8009 | | 0.1617 | 26.32 | 9740 | 0.5211 | 0.7385 | 0.8321 | 0.8988 | 0.9521 | 0.8121 | 0.8831 | 0.5440 | 0.9110 | 0.8767 | 0.8458 | 0.8827 | 0.6871 | 0.7760 | 0.4498 | 0.8324 | 0.7343 | 0.8072 | | 0.0795 | 26.38 | 9760 | 0.5301 | 0.7383 | 0.8341 | 0.8987 | 0.9469 | 0.8397 | 0.8710 | 0.5431 | 0.9190 | 0.8748 | 0.8438 | 0.8834 | 0.6844 | 0.7765 | 0.4521 | 0.8324 | 0.7333 | 0.8060 | | 0.1803 | 26.43 | 9780 | 0.5257 | 0.7361 | 0.8295 | 0.8975 | 0.9495 | 0.8117 | 0.8800 | 0.5346 | 0.9147 | 0.8800 | 0.8361 | 0.8829 | 0.6848 | 0.7753 | 0.4501 | 0.8293 | 0.7337 | 0.7966 | | 0.0867 | 26.49 | 9800 | 0.5289 | 0.7328 | 0.8277 | 0.8952 | 0.9472 | 0.8110 | 0.8788 | 0.5548 | 0.9198 | 0.8596 | 0.8224 | 0.8819 | 0.6828 | 0.7728 | 0.4491 | 0.8248 | 0.7345 | 0.7838 | | 0.1211 | 26.54 | 9820 | 0.5161 | 0.7339 | 0.8285 | 0.8959 | 0.9488 | 0.8200 | 0.8818 | 0.5393 | 0.9150 | 0.8626 | 0.8317 | 0.8805 | 0.6848 | 0.7715 | 0.4479 | 0.8264 | 0.7329 | 0.7934 | | 0.1379 | 26.59 | 9840 | 0.5011 | 0.7326 | 0.8262 | 0.8952 | 0.9479 | 0.8231 | 0.8711 | 0.5379 | 0.9216 | 0.8606 | 0.8210 | 0.8812 | 0.6865 | 0.7724 | 0.4467 | 0.8245 | 0.7352 | 0.7818 | | 0.1061 | 26.65 | 9860 | 0.5189 | 0.7372 | 0.8294 | 0.8982 | 0.9471 | 0.8104 | 0.8816 | 0.5375 | 0.9204 | 0.8652 | 0.8436 | 0.8816 | 0.6863 | 0.7703 | 0.4495 | 0.8323 | 0.7355 | 0.8048 | | 0.139 | 26.7 | 9880 | 0.4868 | 0.7371 | 0.8308 | 0.8982 | 0.9514 | 0.8217 | 0.8705 | 0.5445 | 0.9155 | 0.8650 | 0.8466 | 0.8808 | 0.6873 | 0.7666 | 0.4478 | 0.8342 | 0.7388 | 0.8041 | | 0.1702 | 26.76 | 9900 | 0.5116 | 0.7351 | 0.8251 | 0.8973 | 0.9502 | 0.7995 | 0.8598 | 0.5465 | 0.9247 | 0.8526 | 0.8424 | 0.8799 | 0.6828 | 0.7610 | 0.4553 | 0.8347 | 0.7375 | 0.7943 | | 0.1081 | 26.81 | 9920 | 0.5044 | 0.7349 | 0.8282 | 0.8969 | 0.9419 | 0.7919 | 0.8615 | 0.5540 | 0.9228 | 0.8725 | 0.8531 | 0.8808 | 0.6802 | 0.7617 | 0.4570 | 0.8337 | 0.7391 | 0.7916 | | 0.1345 | 26.86 | 9940 | 0.4901 | 0.7391 | 0.8286 | 0.8994 | 0.9501 | 0.8222 | 0.8787 | 0.5168 | 0.9210 | 0.8635 | 0.8475 | 0.8815 | 0.6888 | 0.7741 | 0.4503 | 0.8340 | 0.7379 | 0.8070 | | 0.114 | 26.92 | 9960 | 0.4875 | 0.7385 | 0.8271 | 0.8992 | 0.9513 | 0.7924 | 0.8798 | 0.5379 | 0.9220 | 0.8652 | 0.8414 | 0.8826 | 0.6847 | 0.7763 | 0.4510 | 0.8333 | 0.7376 | 0.8044 | | 0.1922 | 26.97 | 9980 | 0.4993 | 0.7388 | 0.8329 | 0.8991 | 0.9487 | 0.8103 | 0.8764 | 0.5558 | 0.9176 | 0.8760 | 0.8456 | 0.8837 | 0.6851 | 0.7752 | 0.4488 | 0.8334 | 0.7387 | 0.8068 | | 0.1303 | 27.03 | 10000 | 0.5065 | 0.7384 | 0.8337 | 0.8989 | 0.9510 | 0.8211 | 0.8756 | 0.5511 | 0.9129 | 0.8800 | 0.8443 | 0.8826 | 0.6868 | 0.7757 | 0.4441 | 0.8333 | 
0.7395 | 0.8065 | | 0.0619 | 27.08 | 10020 | 0.5091 | 0.7391 | 0.8296 | 0.8999 | 0.9461 | 0.8242 | 0.8800 | 0.5231 | 0.9290 | 0.8626 | 0.8424 | 0.8833 | 0.6889 | 0.7785 | 0.4408 | 0.8354 | 0.7414 | 0.8058 | | 0.0805 | 27.14 | 10040 | 0.5210 | 0.7397 | 0.8326 | 0.8997 | 0.9500 | 0.8180 | 0.8742 | 0.5458 | 0.9189 | 0.8760 | 0.8453 | 0.8835 | 0.6883 | 0.7763 | 0.4459 | 0.8348 | 0.7412 | 0.8076 | | 0.2213 | 27.19 | 10060 | 0.5177 | 0.7379 | 0.8293 | 0.8988 | 0.9537 | 0.8017 | 0.8765 | 0.5352 | 0.9113 | 0.8780 | 0.8485 | 0.8819 | 0.6866 | 0.7755 | 0.4449 | 0.8322 | 0.7365 | 0.8075 | | 0.1811 | 27.24 | 10080 | 0.5258 | 0.7398 | 0.8333 | 0.8990 | 0.9490 | 0.8217 | 0.8714 | 0.5409 | 0.9137 | 0.8881 | 0.8484 | 0.8839 | 0.6890 | 0.7756 | 0.4572 | 0.8306 | 0.7338 | 0.8081 | | 0.6655 | 27.3 | 10100 | 0.5268 | 0.7389 | 0.8316 | 0.8987 | 0.9495 | 0.8248 | 0.8726 | 0.5327 | 0.9156 | 0.8809 | 0.8452 | 0.8842 | 0.6880 | 0.7746 | 0.4548 | 0.8293 | 0.7341 | 0.8071 | | 0.1579 | 27.35 | 10120 | 0.5147 | 0.7381 | 0.8296 | 0.8987 | 0.9483 | 0.8243 | 0.8675 | 0.5196 | 0.9187 | 0.8809 | 0.8480 | 0.8840 | 0.6876 | 0.7736 | 0.4492 | 0.8295 | 0.7355 | 0.8071 | | 0.5273 | 27.41 | 10140 | 0.5186 | 0.7369 | 0.8294 | 0.8977 | 0.9473 | 0.8009 | 0.8701 | 0.5472 | 0.9178 | 0.8754 | 0.8471 | 0.8834 | 0.6814 | 0.7726 | 0.4553 | 0.8276 | 0.7322 | 0.8060 | | 0.1196 | 27.46 | 10160 | 0.5318 | 0.7356 | 0.8309 | 0.8970 | 0.9497 | 0.7965 | 0.8818 | 0.5611 | 0.9106 | 0.8761 | 0.8403 | 0.8821 | 0.6788 | 0.7703 | 0.4550 | 0.8280 | 0.7308 | 0.8045 | | 0.108 | 27.51 | 10180 | 0.5292 | 0.7354 | 0.8273 | 0.8971 | 0.9462 | 0.7953 | 0.8793 | 0.5411 | 0.9200 | 0.8675 | 0.8417 | 0.8826 | 0.6789 | 0.7699 | 0.4533 | 0.8272 | 0.7306 | 0.8050 | | 0.1212 | 27.57 | 10200 | 0.5117 | 0.7372 | 0.8316 | 0.8975 | 0.9457 | 0.8175 | 0.8854 | 0.5309 | 0.9114 | 0.8811 | 0.8494 | 0.8808 | 0.6855 | 0.7677 | 0.4564 | 0.8293 | 0.7334 | 0.8072 | | 0.0762 | 27.62 | 10220 | 0.5245 | 0.7373 | 0.8284 | 0.8980 | 0.9487 | 0.8041 | 0.8775 | 0.5279 | 0.9154 | 0.8784 | 0.8470 | 0.8815 | 0.6849 | 0.7697 | 0.4555 | 0.8304 | 0.7328 | 0.8065 | | 0.1431 | 27.68 | 10240 | 0.5074 | 0.7373 | 0.8373 | 0.8972 | 0.9448 | 0.8280 | 0.8723 | 0.5691 | 0.9080 | 0.8889 | 0.8503 | 0.8807 | 0.6861 | 0.7689 | 0.4562 | 0.8304 | 0.7309 | 0.8080 | | 0.0977 | 27.73 | 10260 | 0.4905 | 0.7374 | 0.8312 | 0.8980 | 0.9463 | 0.8013 | 0.8798 | 0.5569 | 0.9174 | 0.8681 | 0.8490 | 0.8806 | 0.6843 | 0.7701 | 0.4504 | 0.8324 | 0.7355 | 0.8085 | | 0.0907 | 27.78 | 10280 | 0.5180 | 0.7376 | 0.8284 | 0.8987 | 0.9501 | 0.8142 | 0.8802 | 0.5167 | 0.9164 | 0.8756 | 0.8457 | 0.8813 | 0.6881 | 0.7719 | 0.4447 | 0.8324 | 0.7380 | 0.8068 | | 0.0557 | 27.84 | 10300 | 0.5174 | 0.7372 | 0.8309 | 0.8982 | 0.9471 | 0.8304 | 0.8776 | 0.5271 | 0.9178 | 0.8689 | 0.8470 | 0.8810 | 0.6863 | 0.7715 | 0.4454 | 0.8315 | 0.7377 | 0.8069 | | 0.0937 | 27.89 | 10320 | 0.5085 | 0.7360 | 0.8260 | 0.8973 | 0.9505 | 0.8049 | 0.8751 | 0.5249 | 0.9183 | 0.8739 | 0.8343 | 0.8817 | 0.6877 | 0.7741 | 0.4474 | 0.8278 | 0.7395 | 0.7936 | | 0.228 | 27.95 | 10340 | 0.5105 | 0.7377 | 0.8325 | 0.8980 | 0.9485 | 0.8262 | 0.8819 | 0.5283 | 0.9106 | 0.8910 | 0.8410 | 0.8828 | 0.6890 | 0.7758 | 0.4509 | 0.8292 | 0.7344 | 0.8016 | | 0.0727 | 28.0 | 10360 | 0.5217 | 0.7350 | 0.8302 | 0.8961 | 0.9454 | 0.8205 | 0.8783 | 0.5401 | 0.9158 | 0.8810 | 0.8304 | 0.8824 | 0.6883 | 0.7751 | 0.4506 | 0.8248 | 0.7360 | 0.7879 | | 0.1394 | 28.05 | 10380 | 0.5540 | 0.7310 | 0.8262 | 0.8939 | 0.9478 | 0.8225 | 0.8790 | 0.5230 | 0.9133 | 0.8870 | 0.8109 | 0.8819 | 
0.6884 | 0.7749 | 0.4443 | 0.8197 | 0.7351 | 0.7725 | | 0.1959 | 28.11 | 10400 | 0.5680 | 0.7282 | 0.8241 | 0.8919 | 0.9460 | 0.8195 | 0.8781 | 0.5323 | 0.9171 | 0.8811 | 0.7947 | 0.8815 | 0.6856 | 0.7757 | 0.4482 | 0.8162 | 0.7345 | 0.7559 | | 0.0682 | 28.16 | 10420 | 0.5632 | 0.7286 | 0.8259 | 0.8918 | 0.9475 | 0.8259 | 0.8832 | 0.5388 | 0.9135 | 0.8815 | 0.7907 | 0.8826 | 0.6877 | 0.7753 | 0.4524 | 0.8150 | 0.7329 | 0.7542 | | 0.1156 | 28.22 | 10440 | 0.5140 | 0.7347 | 0.8316 | 0.8957 | 0.9471 | 0.8257 | 0.8749 | 0.5484 | 0.9126 | 0.8868 | 0.8258 | 0.8833 | 0.6877 | 0.7757 | 0.4534 | 0.8238 | 0.7318 | 0.7870 | | 0.1874 | 28.27 | 10460 | 0.5246 | 0.7352 | 0.8281 | 0.8968 | 0.9506 | 0.8033 | 0.8802 | 0.5364 | 0.9131 | 0.8808 | 0.8322 | 0.8828 | 0.6850 | 0.7739 | 0.4491 | 0.8264 | 0.7351 | 0.7943 | | 0.1189 | 28.32 | 10480 | 0.5064 | 0.7375 | 0.8320 | 0.8976 | 0.9552 | 0.8172 | 0.8735 | 0.5404 | 0.9022 | 0.8901 | 0.8454 | 0.8809 | 0.6879 | 0.7758 | 0.4522 | 0.8283 | 0.7306 | 0.8069 | | 0.0943 | 28.38 | 10500 | 0.4890 | 0.7394 | 0.8322 | 0.8989 | 0.9503 | 0.8122 | 0.8806 | 0.5486 | 0.9140 | 0.8699 | 0.8497 | 0.8833 | 0.6874 | 0.7768 | 0.4554 | 0.8308 | 0.7340 | 0.8078 | | 0.0755 | 28.43 | 10520 | 0.5101 | 0.7389 | 0.8329 | 0.8984 | 0.9478 | 0.8138 | 0.8846 | 0.5489 | 0.9140 | 0.8745 | 0.8466 | 0.8833 | 0.6869 | 0.7765 | 0.4561 | 0.8297 | 0.7348 | 0.8050 | | 0.0791 | 28.49 | 10540 | 0.5065 | 0.7374 | 0.8293 | 0.8976 | 0.9479 | 0.8110 | 0.8706 | 0.5481 | 0.9208 | 0.8684 | 0.8386 | 0.8828 | 0.6881 | 0.7749 | 0.4556 | 0.8284 | 0.7349 | 0.7970 | | 0.1102 | 28.54 | 10560 | 0.5250 | 0.7382 | 0.8308 | 0.8984 | 0.9456 | 0.8251 | 0.8808 | 0.5276 | 0.9201 | 0.8731 | 0.8432 | 0.8823 | 0.6884 | 0.7736 | 0.4523 | 0.8301 | 0.7352 | 0.8058 | | 0.0966 | 28.59 | 10580 | 0.5121 | 0.7383 | 0.8300 | 0.8988 | 0.9495 | 0.8170 | 0.8716 | 0.5289 | 0.9173 | 0.8790 | 0.8467 | 0.8828 | 0.6866 | 0.7739 | 0.4510 | 0.8313 | 0.7361 | 0.8067 | | 0.0983 | 28.65 | 10600 | 0.5138 | 0.7373 | 0.8314 | 0.8979 | 0.9517 | 0.8232 | 0.8747 | 0.5292 | 0.9075 | 0.8836 | 0.8498 | 0.8825 | 0.6879 | 0.7754 | 0.4499 | 0.8285 | 0.7291 | 0.8079 | | 0.1172 | 28.7 | 10620 | 0.5070 | 0.7353 | 0.8305 | 0.8965 | 0.9486 | 0.8033 | 0.8627 | 0.5559 | 0.9095 | 0.8822 | 0.8511 | 0.8816 | 0.6849 | 0.7678 | 0.4554 | 0.8281 | 0.7283 | 0.8007 | | 0.0848 | 28.76 | 10640 | 0.5343 | 0.7349 | 0.8277 | 0.8966 | 0.9473 | 0.8163 | 0.8639 | 0.5377 | 0.9189 | 0.8646 | 0.8449 | 0.8804 | 0.6878 | 0.7648 | 0.4513 | 0.8291 | 0.7315 | 0.7992 | | 0.0883 | 28.81 | 10660 | 0.5272 | 0.7328 | 0.8273 | 0.8958 | 0.9458 | 0.8120 | 0.8575 | 0.5372 | 0.9160 | 0.8721 | 0.8501 | 0.8804 | 0.6831 | 0.7607 | 0.4502 | 0.8295 | 0.7346 | 0.7910 | | 0.0658 | 28.86 | 10680 | 0.4960 | 0.7376 | 0.8344 | 0.8975 | 0.9476 | 0.8178 | 0.8731 | 0.5653 | 0.9108 | 0.8765 | 0.8496 | 0.8823 | 0.6899 | 0.7722 | 0.4532 | 0.8293 | 0.7336 | 0.8029 | | 0.1813 | 28.92 | 10700 | 0.4870 | 0.7373 | 0.8351 | 0.8972 | 0.9463 | 0.8145 | 0.8781 | 0.5677 | 0.9079 | 0.8770 | 0.8539 | 0.8821 | 0.6908 | 0.7763 | 0.4499 | 0.8282 | 0.7317 | 0.8023 | | 0.0801 | 28.97 | 10720 | 0.5059 | 0.7395 | 0.8358 | 0.8987 | 0.9490 | 0.8290 | 0.8881 | 0.5531 | 0.9100 | 0.8734 | 0.8478 | 0.8833 | 0.6922 | 0.7757 | 0.4509 | 0.8306 | 0.7363 | 0.8072 | | 0.1161 | 29.03 | 10740 | 0.4992 | 0.7385 | 0.8267 | 0.8996 | 0.9506 | 0.8039 | 0.8764 | 0.5176 | 0.9227 | 0.8709 | 0.8448 | 0.8834 | 0.6889 | 0.7763 | 0.4424 | 0.8332 | 0.7387 | 0.8068 | | 0.102 | 29.08 | 10760 | 0.4972 | 0.7395 | 0.8333 | 0.8990 | 0.9481 | 0.8239 | 0.8832 | 0.5419 | 
0.9151 | 0.8723 | 0.8489 | 0.8835 | 0.6906 | 0.7747 | 0.4518 | 0.8309 | 0.7373 | 0.8080 | | 0.0575 | 29.14 | 10780 | 0.4989 | 0.7384 | 0.8356 | 0.8978 | 0.9496 | 0.8199 | 0.8850 | 0.5567 | 0.9047 | 0.8864 | 0.8469 | 0.8826 | 0.6898 | 0.7748 | 0.4529 | 0.8281 | 0.7331 | 0.8076 | | 0.2291 | 29.19 | 10800 | 0.5068 | 0.7395 | 0.8337 | 0.8989 | 0.9509 | 0.8265 | 0.8804 | 0.5405 | 0.9111 | 0.8807 | 0.8460 | 0.8827 | 0.6904 | 0.7779 | 0.4512 | 0.8310 | 0.7357 | 0.8079 | | 0.1433 | 29.24 | 10820 | 0.5152 | 0.7379 | 0.8321 | 0.8982 | 0.9523 | 0.8245 | 0.8827 | 0.5227 | 0.9048 | 0.8912 | 0.8469 | 0.8821 | 0.6901 | 0.7780 | 0.4470 | 0.8290 | 0.7316 | 0.8077 | | 0.108 | 29.3 | 10840 | 0.5099 | 0.7376 | 0.8301 | 0.8985 | 0.9514 | 0.8171 | 0.8771 | 0.5298 | 0.9138 | 0.8765 | 0.8452 | 0.8829 | 0.6880 | 0.7771 | 0.4444 | 0.8303 | 0.7334 | 0.8075 | | 0.0483 | 29.35 | 10860 | 0.5098 | 0.7386 | 0.8324 | 0.8989 | 0.9517 | 0.8164 | 0.8801 | 0.5440 | 0.9122 | 0.8764 | 0.8463 | 0.8838 | 0.6881 | 0.7783 | 0.4459 | 0.8313 | 0.7351 | 0.8077 | | 0.1346 | 29.41 | 10880 | 0.5062 | 0.7387 | 0.8303 | 0.8992 | 0.9506 | 0.8198 | 0.8838 | 0.5155 | 0.9127 | 0.8792 | 0.8506 | 0.8827 | 0.6920 | 0.7781 | 0.4410 | 0.8315 | 0.7375 | 0.8078 | | 0.0519 | 29.46 | 10900 | 0.5033 | 0.7385 | 0.8364 | 0.8982 | 0.9490 | 0.8246 | 0.8848 | 0.5554 | 0.9049 | 0.8846 | 0.8515 | 0.8829 | 0.6900 | 0.7758 | 0.4475 | 0.8291 | 0.7359 | 0.8085 | | 0.0887 | 29.51 | 10920 | 0.4944 | 0.7391 | 0.8346 | 0.8988 | 0.9458 | 0.8139 | 0.8788 | 0.5670 | 0.9165 | 0.8630 | 0.8573 | 0.8828 | 0.6891 | 0.7742 | 0.4471 | 0.8323 | 0.7388 | 0.8096 | | 0.0459 | 29.57 | 10940 | 0.5146 | 0.7377 | 0.8318 | 0.8981 | 0.9517 | 0.7884 | 0.8793 | 0.5671 | 0.9088 | 0.8766 | 0.8504 | 0.8824 | 0.6827 | 0.7711 | 0.4512 | 0.8303 | 0.7370 | 0.8091 | | 0.1426 | 29.62 | 10960 | 0.5061 | 0.7391 | 0.8341 | 0.8988 | 0.9486 | 0.8155 | 0.8785 | 0.5586 | 0.9139 | 0.8752 | 0.8485 | 0.8831 | 0.6862 | 0.7744 | 0.4548 | 0.8319 | 0.7346 | 0.8086 | | 0.1005 | 29.68 | 10980 | 0.5043 | 0.7394 | 0.8359 | 0.8989 | 0.9454 | 0.8271 | 0.8817 | 0.5554 | 0.9151 | 0.8744 | 0.8519 | 0.8828 | 0.6846 | 0.7763 | 0.4548 | 0.8325 | 0.7352 | 0.8093 | | 0.0813 | 29.73 | 11000 | 0.5017 | 0.7388 | 0.8340 | 0.8990 | 0.9493 | 0.8224 | 0.8803 | 0.5501 | 0.9145 | 0.8769 | 0.8447 | 0.8832 | 0.6869 | 0.7761 | 0.4513 | 0.8335 | 0.7339 | 0.8067 | | 0.097 | 29.78 | 11020 | 0.5056 | 0.7388 | 0.8330 | 0.8989 | 0.9509 | 0.8049 | 0.8786 | 0.5613 | 0.9136 | 0.8766 | 0.8450 | 0.8837 | 0.6860 | 0.7753 | 0.4515 | 0.8325 | 0.7349 | 0.8076 | | 0.1054 | 29.84 | 11040 | 0.4909 | 0.7394 | 0.8345 | 0.8990 | 0.9499 | 0.8158 | 0.8694 | 0.5638 | 0.9135 | 0.8773 | 0.8518 | 0.8833 | 0.6873 | 0.7743 | 0.4534 | 0.8327 | 0.7337 | 0.8112 | | 0.0929 | 29.89 | 11060 | 0.4900 | 0.7388 | 0.8349 | 0.8985 | 0.9485 | 0.8152 | 0.8774 | 0.5662 | 0.9119 | 0.8715 | 0.8536 | 0.8830 | 0.6888 | 0.7755 | 0.4492 | 0.8310 | 0.7317 | 0.8124 | | 0.1017 | 29.95 | 11080 | 0.4941 | 0.7379 | 0.8332 | 0.8981 | 0.9515 | 0.7969 | 0.8751 | 0.5676 | 0.9073 | 0.8831 | 0.8506 | 0.8838 | 0.6865 | 0.7753 | 0.4513 | 0.8294 | 0.7288 | 0.8102 | | 0.2887 | 30.0 | 11100 | 0.5168 | 0.7367 | 0.8276 | 0.8984 | 0.9523 | 0.8127 | 0.8781 | 0.5245 | 0.9167 | 0.8651 | 0.8438 | 0.8826 | 0.6891 | 0.7747 | 0.4405 | 0.8310 | 0.7315 | 0.8073 | | 0.1735 | 30.05 | 11120 | 0.5168 | 0.7373 | 0.8282 | 0.8986 | 0.9470 | 0.8124 | 0.8744 | 0.5265 | 0.9226 | 0.8662 | 0.8484 | 0.8836 | 0.6888 | 0.7748 | 0.4427 | 0.8308 | 0.7317 | 0.8087 | | 0.1297 | 30.11 | 11140 | 0.4993 | 0.7368 | 0.8284 | 0.8985 | 
0.9500 | 0.7898 | 0.8753 | 0.5373 | 0.9147 | 0.8815 | 0.8501 | 0.8840 | 0.6848 | 0.7736 | 0.4434 | 0.8311 | 0.7314 | 0.8091 | | 0.0236 | 30.16 | 11160 | 0.5068 | 0.7380 | 0.8331 | 0.8986 | 0.9495 | 0.8218 | 0.8759 | 0.5429 | 0.9121 | 0.8796 | 0.8498 | 0.8837 | 0.6900 | 0.7754 | 0.4446 | 0.8311 | 0.7325 | 0.8087 | | 0.076 | 30.22 | 11180 | 0.5090 | 0.7363 | 0.8370 | 0.8973 | 0.9483 | 0.8279 | 0.8831 | 0.5709 | 0.9057 | 0.8742 | 0.8487 | 0.8832 | 0.6875 | 0.7753 | 0.4391 | 0.8286 | 0.7319 | 0.8084 | | 0.0789 | 30.27 | 11200 | 0.5203 | 0.7369 | 0.8355 | 0.8975 | 0.9489 | 0.8181 | 0.8839 | 0.5596 | 0.9050 | 0.8864 | 0.8467 | 0.8833 | 0.6862 | 0.7751 | 0.4478 | 0.8286 | 0.7292 | 0.8081 | | 0.1242 | 30.32 | 11220 | 0.5284 | 0.7382 | 0.8357 | 0.8981 | 0.9463 | 0.8243 | 0.8691 | 0.5792 | 0.9172 | 0.8660 | 0.8479 | 0.8831 | 0.6856 | 0.7738 | 0.4558 | 0.8311 | 0.7300 | 0.8083 | | 0.061 | 30.38 | 11240 | 0.5051 | 0.7383 | 0.8313 | 0.8987 | 0.9498 | 0.7971 | 0.8681 | 0.5757 | 0.9201 | 0.8584 | 0.8496 | 0.8832 | 0.6847 | 0.7747 | 0.4523 | 0.8326 | 0.7321 | 0.8086 | | 0.0737 | 30.43 | 11260 | 0.5169 | 0.7388 | 0.8316 | 0.8989 | 0.9465 | 0.8152 | 0.8759 | 0.5519 | 0.9223 | 0.8617 | 0.8478 | 0.8827 | 0.6864 | 0.7736 | 0.4532 | 0.8327 | 0.7347 | 0.8083 | | 0.0789 | 30.49 | 11280 | 0.5029 | 0.7359 | 0.8330 | 0.8974 | 0.9464 | 0.8104 | 0.8656 | 0.5689 | 0.9150 | 0.8761 | 0.8489 | 0.8828 | 0.6851 | 0.7625 | 0.4521 | 0.8324 | 0.7345 | 0.8020 | | 0.0723 | 30.54 | 11300 | 0.4975 | 0.7363 | 0.8326 | 0.8975 | 0.9466 | 0.8122 | 0.8679 | 0.5629 | 0.9146 | 0.8731 | 0.8512 | 0.8820 | 0.6853 | 0.7654 | 0.4529 | 0.8320 | 0.7335 | 0.8033 | | 0.0934 | 30.59 | 11320 | 0.5015 | 0.7380 | 0.8351 | 0.8982 | 0.9452 | 0.8144 | 0.8751 | 0.5648 | 0.9149 | 0.8844 | 0.8469 | 0.8832 | 0.6853 | 0.7719 | 0.4531 | 0.8322 | 0.7335 | 0.8066 | | 0.1817 | 30.65 | 11340 | 0.4986 | 0.7374 | 0.8329 | 0.8980 | 0.9450 | 0.8080 | 0.8801 | 0.5540 | 0.9152 | 0.8806 | 0.8475 | 0.8827 | 0.6849 | 0.7719 | 0.4515 | 0.8305 | 0.7331 | 0.8072 | | 0.0949 | 30.7 | 11360 | 0.5022 | 0.7370 | 0.8321 | 0.8978 | 0.9469 | 0.8177 | 0.8678 | 0.5522 | 0.9156 | 0.8730 | 0.8519 | 0.8825 | 0.6862 | 0.7704 | 0.4498 | 0.8304 | 0.7316 | 0.8079 | | 0.0598 | 30.76 | 11380 | 0.5137 | 0.7355 | 0.8334 | 0.8967 | 0.9466 | 0.8195 | 0.8797 | 0.5513 | 0.9079 | 0.8799 | 0.8487 | 0.8810 | 0.6868 | 0.7694 | 0.4459 | 0.8280 | 0.7298 | 0.8076 | | 0.0862 | 30.81 | 11400 | 0.5024 | 0.7361 | 0.8323 | 0.8974 | 0.9450 | 0.8229 | 0.8762 | 0.5460 | 0.9152 | 0.8693 | 0.8512 | 0.8815 | 0.6877 | 0.7719 | 0.4438 | 0.8301 | 0.7310 | 0.8065 | | 0.0986 | 30.86 | 11420 | 0.4984 | 0.7368 | 0.8314 | 0.8982 | 0.9477 | 0.8202 | 0.8763 | 0.5405 | 0.9165 | 0.8707 | 0.8482 | 0.8827 | 0.6866 | 0.7743 | 0.4448 | 0.8320 | 0.7319 | 0.8057 | | 0.1547 | 30.92 | 11440 | 0.4873 | 0.7368 | 0.8292 | 0.8986 | 0.9499 | 0.8000 | 0.8805 | 0.5394 | 0.9164 | 0.8715 | 0.8469 | 0.8835 | 0.6826 | 0.7758 | 0.4449 | 0.8324 | 0.7319 | 0.8065 | | 0.0862 | 30.97 | 11460 | 0.4972 | 0.7367 | 0.8282 | 0.8985 | 0.9486 | 0.8141 | 0.8820 | 0.5210 | 0.9190 | 0.8664 | 0.8466 | 0.8825 | 0.6864 | 0.7747 | 0.4415 | 0.8319 | 0.7343 | 0.8055 | | 0.0682 | 31.03 | 11480 | 0.5034 | 0.7351 | 0.8300 | 0.8973 | 0.9477 | 0.8149 | 0.8659 | 0.5473 | 0.9164 | 0.8677 | 0.8500 | 0.8813 | 0.6870 | 0.7677 | 0.4425 | 0.8320 | 0.7335 | 0.8014 | | 0.1793 | 31.08 | 11500 | 0.5248 | 0.7319 | 0.8226 | 0.8969 | 0.9475 | 0.8239 | 0.8575 | 0.4909 | 0.9229 | 0.8654 | 0.8501 | 0.8808 | 0.6870 | 0.7657 | 0.4282 | 0.8318 | 0.7336 | 0.7964 | | 0.0667 | 31.14 | 11520 | 
0.5120 | 0.7352 | 0.8295 | 0.8971 | 0.9477 | 0.8112 | 0.8644 | 0.5442 | 0.9152 | 0.8732 | 0.8508 | 0.8808 | 0.6864 | 0.7683 | 0.4458 | 0.8315 | 0.7320 | 0.8014 | | 0.0515 | 31.19 | 11540 | 0.5148 | 0.7356 | 0.8297 | 0.8971 | 0.9450 | 0.8034 | 0.8681 | 0.5526 | 0.9180 | 0.8703 | 0.8508 | 0.8808 | 0.6851 | 0.7687 | 0.4513 | 0.8313 | 0.7315 | 0.8006 | | 0.0918 | 31.24 | 11560 | 0.5018 | 0.7368 | 0.8314 | 0.8977 | 0.9470 | 0.8104 | 0.8627 | 0.5641 | 0.9187 | 0.8665 | 0.8507 | 0.8816 | 0.6864 | 0.7713 | 0.4510 | 0.8322 | 0.7330 | 0.8024 | | 0.0613 | 31.3 | 11580 | 0.5291 | 0.7375 | 0.8323 | 0.8979 | 0.9490 | 0.8056 | 0.8765 | 0.5574 | 0.9117 | 0.8774 | 0.8485 | 0.8820 | 0.6860 | 0.7743 | 0.4521 | 0.8304 | 0.7317 | 0.8063 | | 0.3483 | 31.35 | 11600 | 0.5338 | 0.7363 | 0.8296 | 0.8974 | 0.9506 | 0.8040 | 0.8788 | 0.5517 | 0.9130 | 0.8604 | 0.8488 | 0.8808 | 0.6839 | 0.7724 | 0.4504 | 0.8296 | 0.7306 | 0.8066 | | 0.0948 | 31.41 | 11620 | 0.5038 | 0.7373 | 0.8361 | 0.8974 | 0.9480 | 0.8265 | 0.8796 | 0.5653 | 0.9071 | 0.8763 | 0.8496 | 0.8816 | 0.6865 | 0.7735 | 0.4515 | 0.8294 | 0.7318 | 0.8068 | | 0.1159 | 31.46 | 11640 | 0.5139 | 0.7364 | 0.8335 | 0.8973 | 0.9460 | 0.8170 | 0.8755 | 0.5700 | 0.9169 | 0.8668 | 0.8422 | 0.8823 | 0.6847 | 0.7736 | 0.4488 | 0.8301 | 0.7336 | 0.8015 | | 0.0652 | 31.51 | 11660 | 0.5207 | 0.7370 | 0.8320 | 0.8979 | 0.9482 | 0.8074 | 0.8741 | 0.5614 | 0.9158 | 0.8722 | 0.8452 | 0.8822 | 0.6813 | 0.7709 | 0.4528 | 0.8312 | 0.7342 | 0.8066 | | 0.0718 | 31.57 | 11680 | 0.5304 | 0.7388 | 0.8330 | 0.8989 | 0.9479 | 0.8210 | 0.8744 | 0.5500 | 0.9178 | 0.8721 | 0.8478 | 0.8829 | 0.6836 | 0.7745 | 0.4565 | 0.8330 | 0.7345 | 0.8063 | | 0.0911 | 31.62 | 11700 | 0.5200 | 0.7388 | 0.8323 | 0.8989 | 0.9500 | 0.8152 | 0.8838 | 0.5456 | 0.9143 | 0.8702 | 0.8470 | 0.8827 | 0.6847 | 0.7758 | 0.4549 | 0.8326 | 0.7348 | 0.8061 | | 0.484 | 31.68 | 11720 | 0.5312 | 0.7382 | 0.8305 | 0.8986 | 0.9493 | 0.8051 | 0.8792 | 0.5463 | 0.9174 | 0.8705 | 0.8454 | 0.8826 | 0.6829 | 0.7749 | 0.4560 | 0.8324 | 0.7327 | 0.8058 | | 0.0859 | 31.73 | 11740 | 0.5135 | 0.7387 | 0.8314 | 0.8988 | 0.9497 | 0.8054 | 0.8717 | 0.5569 | 0.9180 | 0.8701 | 0.8478 | 0.8829 | 0.6835 | 0.7754 | 0.4571 | 0.8335 | 0.7329 | 0.8053 | | 0.1524 | 31.78 | 11760 | 0.5221 | 0.7377 | 0.8322 | 0.8982 | 0.9487 | 0.8046 | 0.8739 | 0.5673 | 0.9166 | 0.8677 | 0.8463 | 0.8824 | 0.6832 | 0.7739 | 0.4554 | 0.8322 | 0.7317 | 0.8054 | | 0.075 | 31.84 | 11780 | 0.5247 | 0.7370 | 0.8360 | 0.8976 | 0.9466 | 0.8143 | 0.8765 | 0.5767 | 0.9104 | 0.8799 | 0.8476 | 0.8829 | 0.6849 | 0.7725 | 0.4518 | 0.8310 | 0.7311 | 0.8051 | | 0.0929 | 31.89 | 11800 | 0.5292 | 0.7383 | 0.8321 | 0.8986 | 0.9496 | 0.8164 | 0.8774 | 0.5418 | 0.9140 | 0.8806 | 0.8448 | 0.8830 | 0.6860 | 0.7748 | 0.4547 | 0.8318 | 0.7313 | 0.8066 | | 0.0987 | 31.95 | 11820 | 0.5292 | 0.7385 | 0.8301 | 0.8991 | 0.9499 | 0.8254 | 0.8856 | 0.5263 | 0.9181 | 0.8566 | 0.8489 | 0.8823 | 0.6877 | 0.7756 | 0.4504 | 0.8332 | 0.7337 | 0.8067 | | 0.092 | 32.0 | 11840 | 0.5141 | 0.7386 | 0.8314 | 0.8989 | 0.9479 | 0.8123 | 0.8734 | 0.5431 | 0.9182 | 0.8757 | 0.8492 | 0.8837 | 0.6864 | 0.7745 | 0.4524 | 0.8324 | 0.7335 | 0.8071 | | 0.0866 | 32.05 | 11860 | 0.5429 | 0.7385 | 0.8325 | 0.8988 | 0.9493 | 0.8134 | 0.8796 | 0.5541 | 0.9169 | 0.8685 | 0.8454 | 0.8833 | 0.6860 | 0.7754 | 0.4529 | 0.8329 | 0.7325 | 0.8068 | | 0.8609 | 32.11 | 11880 | 0.5457 | 0.7389 | 0.8313 | 0.8992 | 0.9496 | 0.8079 | 0.8842 | 0.5399 | 0.9155 | 0.8752 | 0.8471 | 0.8837 | 0.6855 | 0.7761 | 0.4539 | 0.8330 | 0.7334 | 
0.8068 | | 0.1286 | 32.16 | 11900 | 0.5281 | 0.7385 | 0.8323 | 0.8989 | 0.9481 | 0.8075 | 0.8879 | 0.5454 | 0.9146 | 0.8732 | 0.8492 | 0.8830 | 0.6843 | 0.7752 | 0.4527 | 0.8332 | 0.7341 | 0.8072 | | 0.1147 | 32.22 | 11920 | 0.5275 | 0.7381 | 0.8342 | 0.8986 | 0.9481 | 0.8173 | 0.8830 | 0.5510 | 0.9118 | 0.8802 | 0.8482 | 0.8830 | 0.6849 | 0.7782 | 0.4497 | 0.8323 | 0.7323 | 0.8063 | | 0.1951 | 32.27 | 11940 | 0.5316 | 0.7379 | 0.8298 | 0.8991 | 0.9505 | 0.7977 | 0.8824 | 0.5455 | 0.9174 | 0.8686 | 0.8466 | 0.8835 | 0.6827 | 0.7800 | 0.4464 | 0.8333 | 0.7332 | 0.8061 | | 0.1721 | 32.32 | 11960 | 0.5186 | 0.7379 | 0.8341 | 0.8986 | 0.9483 | 0.8141 | 0.8829 | 0.5663 | 0.9159 | 0.8663 | 0.8450 | 0.8836 | 0.6850 | 0.7789 | 0.4453 | 0.8326 | 0.7333 | 0.8064 | | 0.1131 | 32.38 | 11980 | 0.5178 | 0.7383 | 0.8347 | 0.8984 | 0.9489 | 0.8125 | 0.8850 | 0.5634 | 0.9107 | 0.8740 | 0.8485 | 0.8832 | 0.6854 | 0.7791 | 0.4487 | 0.8312 | 0.7331 | 0.8073 | | 0.218 | 32.43 | 12000 | 0.5391 | 0.7388 | 0.8327 | 0.8989 | 0.9462 | 0.8218 | 0.8745 | 0.5469 | 0.9201 | 0.8694 | 0.8500 | 0.8829 | 0.6863 | 0.7780 | 0.4499 | 0.8326 | 0.7341 | 0.8077 | | 0.0878 | 32.49 | 12020 | 0.5231 | 0.7395 | 0.8335 | 0.8993 | 0.9499 | 0.8186 | 0.8790 | 0.5511 | 0.9154 | 0.8708 | 0.8495 | 0.8830 | 0.6860 | 0.7787 | 0.4514 | 0.8336 | 0.7359 | 0.8082 | | 0.0989 | 32.54 | 12040 | 0.5410 | 0.7377 | 0.8287 | 0.8992 | 0.9503 | 0.8194 | 0.8819 | 0.5153 | 0.9192 | 0.8709 | 0.8440 | 0.8824 | 0.6836 | 0.7788 | 0.4437 | 0.8337 | 0.7351 | 0.8068 | | 0.0789 | 32.59 | 12060 | 0.5251 | 0.7390 | 0.8325 | 0.8995 | 0.9509 | 0.8187 | 0.8797 | 0.5424 | 0.9159 | 0.8757 | 0.8443 | 0.8833 | 0.6852 | 0.7799 | 0.4474 | 0.8342 | 0.7362 | 0.8069 | | 0.0812 | 32.65 | 12080 | 0.5247 | 0.7380 | 0.8275 | 0.8993 | 0.9517 | 0.7944 | 0.8784 | 0.5329 | 0.9189 | 0.8689 | 0.8475 | 0.8827 | 0.6835 | 0.7795 | 0.4442 | 0.8333 | 0.7356 | 0.8074 | | 0.1758 | 32.7 | 12100 | 0.5203 | 0.7379 | 0.8310 | 0.8989 | 0.9502 | 0.8076 | 0.8869 | 0.5432 | 0.9154 | 0.8677 | 0.8459 | 0.8831 | 0.6860 | 0.7782 | 0.4431 | 0.8327 | 0.7356 | 0.8064 | | 1.11 | 32.76 | 12120 | 0.5273 | 0.7388 | 0.8313 | 0.8992 | 0.9483 | 0.8110 | 0.8816 | 0.5502 | 0.9211 | 0.8609 | 0.8463 | 0.8832 | 0.6859 | 0.7778 | 0.4480 | 0.8334 | 0.7366 | 0.8065 | | 0.1117 | 32.81 | 12140 | 0.5275 | 0.7380 | 0.8336 | 0.8985 | 0.9482 | 0.8261 | 0.8820 | 0.5498 | 0.9156 | 0.8694 | 0.8439 | 0.8827 | 0.6859 | 0.7779 | 0.4487 | 0.8320 | 0.7346 | 0.8043 | | 0.1276 | 32.86 | 12160 | 0.5291 | 0.7375 | 0.8299 | 0.8987 | 0.9501 | 0.8086 | 0.8883 | 0.5343 | 0.9151 | 0.8648 | 0.8480 | 0.8826 | 0.6849 | 0.7777 | 0.4448 | 0.8322 | 0.7336 | 0.8065 | | 1.0557 | 32.92 | 12180 | 0.5154 | 0.7376 | 0.8336 | 0.8985 | 0.9500 | 0.8272 | 0.8801 | 0.5476 | 0.9120 | 0.8697 | 0.8488 | 0.8825 | 0.6850 | 0.7781 | 0.4454 | 0.8326 | 0.7334 | 0.8060 | | 0.0903 | 32.97 | 12200 | 0.5077 | 0.7372 | 0.8313 | 0.8984 | 0.9491 | 0.8106 | 0.8829 | 0.5507 | 0.9171 | 0.8638 | 0.8445 | 0.8832 | 0.6846 | 0.7767 | 0.4457 | 0.8323 | 0.7331 | 0.8047 | | 0.083 | 33.03 | 12220 | 0.5192 | 0.7362 | 0.8287 | 0.8980 | 0.9502 | 0.8013 | 0.8836 | 0.5495 | 0.9186 | 0.8571 | 0.8407 | 0.8830 | 0.6833 | 0.7761 | 0.4458 | 0.8308 | 0.7331 | 0.8016 | | 0.1091 | 33.08 | 12240 | 0.5194 | 0.7358 | 0.8278 | 0.8975 | 0.9471 | 0.8053 | 0.8794 | 0.5445 | 0.9236 | 0.8589 | 0.8358 | 0.8838 | 0.6849 | 0.7767 | 0.4464 | 0.8287 | 0.7345 | 0.7956 | | 1.0202 | 33.14 | 12260 | 0.5309 | 0.7349 | 0.8287 | 0.8966 | 0.9498 | 0.8113 | 0.8817 | 0.5438 | 0.9162 | 0.8699 | 0.8283 | 0.8829 | 0.6850 | 
0.7760 | 0.4487 | 0.8271 | 0.7337 | 0.7907 | | 0.1171 | 33.19 | 12280 | 0.5182 | 0.7354 | 0.8298 | 0.8969 | 0.9472 | 0.8068 | 0.8773 | 0.5545 | 0.9191 | 0.8741 | 0.8299 | 0.8836 | 0.6846 | 0.7757 | 0.4513 | 0.8278 | 0.7325 | 0.7925 | | 0.08 | 33.24 | 12300 | 0.5322 | 0.7374 | 0.8331 | 0.8979 | 0.9495 | 0.8102 | 0.8808 | 0.5709 | 0.9152 | 0.8655 | 0.8395 | 0.8835 | 0.6855 | 0.7771 | 0.4516 | 0.8307 | 0.7323 | 0.8009 | | 0.1076 | 33.3 | 12320 | 0.5310 | 0.7384 | 0.8321 | 0.8986 | 0.9518 | 0.8171 | 0.8812 | 0.5516 | 0.9143 | 0.8655 | 0.8433 | 0.8825 | 0.6867 | 0.7788 | 0.4511 | 0.8323 | 0.7331 | 0.8046 | | 0.0802 | 33.35 | 12340 | 0.5271 | 0.7390 | 0.8323 | 0.8991 | 0.9504 | 0.8175 | 0.8801 | 0.5471 | 0.9158 | 0.8673 | 0.8475 | 0.8825 | 0.6869 | 0.7782 | 0.4505 | 0.8333 | 0.7339 | 0.8076 | | 0.1013 | 33.41 | 12360 | 0.5283 | 0.7387 | 0.8311 | 0.8991 | 0.9506 | 0.8091 | 0.8774 | 0.5511 | 0.9182 | 0.8646 | 0.8466 | 0.8829 | 0.6861 | 0.7766 | 0.4504 | 0.8335 | 0.7341 | 0.8074 | | 0.0571 | 33.46 | 12380 | 0.5331 | 0.7389 | 0.8313 | 0.8994 | 0.9504 | 0.8154 | 0.8806 | 0.5420 | 0.9187 | 0.8676 | 0.8443 | 0.8832 | 0.6860 | 0.7767 | 0.4491 | 0.8344 | 0.7364 | 0.8064 | | 0.1864 | 33.51 | 12400 | 0.5320 | 0.7386 | 0.8314 | 0.8993 | 0.9483 | 0.8282 | 0.8744 | 0.5328 | 0.9216 | 0.8698 | 0.8447 | 0.8824 | 0.6848 | 0.7758 | 0.4484 | 0.8349 | 0.7368 | 0.8068 | | 0.1348 | 33.57 | 12420 | 0.5338 | 0.7369 | 0.8278 | 0.8989 | 0.9485 | 0.8161 | 0.8826 | 0.5116 | 0.9198 | 0.8674 | 0.8483 | 0.8822 | 0.6851 | 0.7767 | 0.4398 | 0.8327 | 0.7340 | 0.8080 | | 0.124 | 33.62 | 12440 | 0.5470 | 0.7370 | 0.8299 | 0.8982 | 0.9489 | 0.8061 | 0.8779 | 0.5421 | 0.9163 | 0.8721 | 0.8459 | 0.8823 | 0.6844 | 0.7746 | 0.4480 | 0.8312 | 0.7319 | 0.8066 | | 0.1407 | 33.68 | 12460 | 0.5247 | 0.7373 | 0.8302 | 0.8985 | 0.9474 | 0.8089 | 0.8794 | 0.5392 | 0.9182 | 0.8704 | 0.8481 | 0.8828 | 0.6848 | 0.7758 | 0.4464 | 0.8319 | 0.7322 | 0.8070 | | 0.0748 | 33.73 | 12480 | 0.5144 | 0.7375 | 0.8328 | 0.8985 | 0.9469 | 0.8202 | 0.8775 | 0.5509 | 0.9174 | 0.8683 | 0.8483 | 0.8830 | 0.6855 | 0.7758 | 0.4455 | 0.8320 | 0.7336 | 0.8073 | | 0.0599 | 33.78 | 12500 | 0.5234 | 0.7374 | 0.8318 | 0.8985 | 0.9471 | 0.8159 | 0.8779 | 0.5543 | 0.9203 | 0.8610 | 0.8458 | 0.8832 | 0.6853 | 0.7767 | 0.4439 | 0.8317 | 0.7359 | 0.8055 | | 0.1267 | 33.84 | 12520 | 0.5111 | 0.7387 | 0.8323 | 0.8990 | 0.9483 | 0.8146 | 0.8782 | 0.5473 | 0.9176 | 0.8719 | 0.8483 | 0.8835 | 0.6861 | 0.7770 | 0.4484 | 0.8326 | 0.7363 | 0.8070 | | 0.1081 | 33.89 | 12540 | 0.5096 | 0.7391 | 0.8335 | 0.8991 | 0.9476 | 0.8171 | 0.8809 | 0.5545 | 0.9173 | 0.8679 | 0.8495 | 0.8833 | 0.6862 | 0.7770 | 0.4499 | 0.8328 | 0.7367 | 0.8077 | | 0.1534 | 33.95 | 12560 | 0.5197 | 0.7377 | 0.8291 | 0.8986 | 0.9490 | 0.8124 | 0.8839 | 0.5406 | 0.9219 | 0.8527 | 0.8437 | 0.8823 | 0.6861 | 0.7761 | 0.4466 | 0.8322 | 0.7373 | 0.8033 | | 0.0883 | 34.0 | 12580 | 0.5253 | 0.7380 | 0.8304 | 0.8986 | 0.9491 | 0.8131 | 0.8832 | 0.5412 | 0.9174 | 0.8600 | 0.8492 | 0.8824 | 0.6867 | 0.7744 | 0.4487 | 0.8319 | 0.7355 | 0.8066 | | 0.1357 | 34.05 | 12600 | 0.5270 | 0.7375 | 0.8304 | 0.8984 | 0.9488 | 0.8137 | 0.8783 | 0.5358 | 0.9153 | 0.8710 | 0.8498 | 0.8820 | 0.6868 | 0.7739 | 0.4469 | 0.8312 | 0.7349 | 0.8067 | | 0.1752 | 34.11 | 12620 | 0.5359 | 0.7350 | 0.8241 | 0.8982 | 0.9464 | 0.8085 | 0.8800 | 0.4924 | 0.9229 | 0.8707 | 0.8476 | 0.8817 | 0.6847 | 0.7727 | 0.4341 | 0.8311 | 0.7352 | 0.8055 | | 0.0891 | 34.16 | 12640 | 0.5250 | 0.7365 | 0.8273 | 0.8986 | 0.9484 | 0.8102 | 0.8824 | 0.5146 | 0.9191 | 
0.8681 | 0.8485 | 0.8821 | 0.6840 | 0.7737 | 0.4401 | 0.8321 | 0.7368 | 0.8063 | | 0.1144 | 34.22 | 12660 | 0.5314 | 0.7360 | 0.8280 | 0.8982 | 0.9497 | 0.8136 | 0.8815 | 0.5158 | 0.9162 | 0.8780 | 0.8412 | 0.8826 | 0.6855 | 0.7738 | 0.4409 | 0.8311 | 0.7352 | 0.8032 | | 0.054 | 34.27 | 12680 | 0.5128 | 0.7381 | 0.8298 | 0.8990 | 0.9500 | 0.8058 | 0.8807 | 0.5305 | 0.9155 | 0.8775 | 0.8484 | 0.8828 | 0.6855 | 0.7761 | 0.4444 | 0.8322 | 0.7380 | 0.8077 | | 0.0874 | 34.32 | 12700 | 0.5174 | 0.7389 | 0.8319 | 0.8990 | 0.9486 | 0.8173 | 0.8792 | 0.5361 | 0.9155 | 0.8763 | 0.8503 | 0.8824 | 0.6873 | 0.7760 | 0.4481 | 0.8322 | 0.7385 | 0.8081 | | 0.1039 | 34.38 | 12720 | 0.5245 | 0.7377 | 0.8303 | 0.8987 | 0.9482 | 0.8039 | 0.8808 | 0.5392 | 0.9175 | 0.8775 | 0.8448 | 0.8827 | 0.6845 | 0.7738 | 0.4461 | 0.8321 | 0.7387 | 0.8063 | | 0.0463 | 34.43 | 12740 | 0.5218 | 0.7380 | 0.8333 | 0.8987 | 0.9475 | 0.8271 | 0.8760 | 0.5472 | 0.9181 | 0.8730 | 0.8441 | 0.8825 | 0.6857 | 0.7744 | 0.4457 | 0.8333 | 0.7380 | 0.8061 | | 0.1081 | 34.49 | 12760 | 0.5196 | 0.7375 | 0.8297 | 0.8988 | 0.9497 | 0.8073 | 0.8747 | 0.5416 | 0.9195 | 0.8703 | 0.8446 | 0.8824 | 0.6847 | 0.7738 | 0.4437 | 0.8338 | 0.7376 | 0.8064 | | 0.081 | 34.54 | 12780 | 0.5125 | 0.7370 | 0.8293 | 0.8986 | 0.9478 | 0.8050 | 0.8777 | 0.5306 | 0.9183 | 0.8787 | 0.8467 | 0.8825 | 0.6828 | 0.7746 | 0.4430 | 0.8323 | 0.7363 | 0.8076 | | 0.0977 | 34.59 | 12800 | 0.5308 | 0.7365 | 0.8292 | 0.8982 | 0.9490 | 0.8074 | 0.8773 | 0.5335 | 0.9170 | 0.8767 | 0.8431 | 0.8821 | 0.6830 | 0.7741 | 0.4437 | 0.8315 | 0.7353 | 0.8058 | | 0.0868 | 34.65 | 12820 | 0.5418 | 0.7351 | 0.8265 | 0.8974 | 0.9477 | 0.8015 | 0.8787 | 0.5256 | 0.9197 | 0.8740 | 0.8384 | 0.8827 | 0.6824 | 0.7740 | 0.4434 | 0.8285 | 0.7345 | 0.8004 | | 0.0608 | 34.7 | 12840 | 0.5251 | 0.7367 | 0.8259 | 0.8986 | 0.9502 | 0.8028 | 0.8760 | 0.5204 | 0.9217 | 0.8674 | 0.8432 | 0.8824 | 0.6839 | 0.7762 | 0.4426 | 0.8315 | 0.7364 | 0.8041 | | 0.0687 | 34.76 | 12860 | 0.5354 | 0.7387 | 0.8303 | 0.8990 | 0.9505 | 0.8126 | 0.8771 | 0.5409 | 0.9185 | 0.8670 | 0.8458 | 0.8825 | 0.6854 | 0.7761 | 0.4510 | 0.8328 | 0.7371 | 0.8061 | | 0.3096 | 34.81 | 12880 | 0.5336 | 0.7377 | 0.8302 | 0.8983 | 0.9494 | 0.8098 | 0.8782 | 0.5466 | 0.9181 | 0.8675 | 0.8421 | 0.8828 | 0.6852 | 0.7750 | 0.4515 | 0.8310 | 0.7359 | 0.8022 | | 0.104 | 34.86 | 12900 | 0.5582 | 0.7363 | 0.8308 | 0.8977 | 0.9489 | 0.8146 | 0.8857 | 0.5545 | 0.9183 | 0.8551 | 0.8383 | 0.8829 | 0.6853 | 0.7744 | 0.4456 | 0.8303 | 0.7354 | 0.8003 | | 0.0936 | 34.92 | 12920 | 0.5128 | 0.7386 | 0.8301 | 0.8993 | 0.9491 | 0.8023 | 0.8804 | 0.5496 | 0.9213 | 0.8595 | 0.8484 | 0.8835 | 0.6845 | 0.7758 | 0.4474 | 0.8335 | 0.7377 | 0.8079 | | 0.0768 | 34.97 | 12940 | 0.5274 | 0.7385 | 0.8312 | 0.8990 | 0.9490 | 0.8079 | 0.8759 | 0.5525 | 0.9196 | 0.8676 | 0.8461 | 0.8827 | 0.6846 | 0.7748 | 0.4492 | 0.8335 | 0.7374 | 0.8073 | | 0.1264 | 35.03 | 12960 | 0.5331 | 0.7383 | 0.8323 | 0.8987 | 0.9497 | 0.8104 | 0.8749 | 0.5549 | 0.9156 | 0.8732 | 0.8474 | 0.8823 | 0.6849 | 0.7751 | 0.4484 | 0.8325 | 0.7367 | 0.8085 | | 0.0914 | 35.08 | 12980 | 0.5402 | 0.7383 | 0.8299 | 0.8989 | 0.9493 | 0.7895 | 0.8738 | 0.5463 | 0.9162 | 0.8866 | 0.8476 | 0.8828 | 0.6809 | 0.7751 | 0.4516 | 0.8322 | 0.7368 | 0.8086 | | 0.1519 | 35.14 | 13000 | 0.5242 | 0.7397 | 0.8319 | 0.8995 | 0.9494 | 0.8131 | 0.8736 | 0.5449 | 0.9180 | 0.8751 | 0.8495 | 0.8827 | 0.6858 | 0.7766 | 0.4513 | 0.8336 | 0.7388 | 0.8088 | | 0.1142 | 35.19 | 13020 | 0.5217 | 0.7387 | 0.8302 | 0.8991 | 0.9510 | 
0.8009 | 0.8777 | 0.5456 | 0.9173 | 0.8752 | 0.8434 | 0.8824 | 0.6836 | 0.7757 | 0.4511 | 0.8332 | 0.7381 | 0.8066 | | 0.0806 | 35.24 | 13040 | 0.5147 | 0.7397 | 0.8316 | 0.8996 | 0.9490 | 0.8127 | 0.8787 | 0.5437 | 0.9202 | 0.8701 | 0.8468 | 0.8829 | 0.6864 | 0.7769 | 0.4506 | 0.8345 | 0.7390 | 0.8077 | | 0.042 | 35.3 | 13060 | 0.5273 | 0.7394 | 0.8318 | 0.8994 | 0.9489 | 0.8131 | 0.8806 | 0.5418 | 0.9181 | 0.8722 | 0.8480 | 0.8829 | 0.6867 | 0.7763 | 0.4487 | 0.8338 | 0.7393 | 0.8080 | | 0.0955 | 35.35 | 13080 | 0.5273 | 0.7391 | 0.8312 | 0.8993 | 0.9464 | 0.8177 | 0.8766 | 0.5423 | 0.9234 | 0.8612 | 0.8509 | 0.8829 | 0.6871 | 0.7765 | 0.4471 | 0.8333 | 0.7381 | 0.8088 | | 0.1172 | 35.41 | 13100 | 0.5268 | 0.7382 | 0.8307 | 0.8989 | 0.9475 | 0.8181 | 0.8754 | 0.5360 | 0.9199 | 0.8677 | 0.8502 | 0.8823 | 0.6860 | 0.7749 | 0.4459 | 0.8323 | 0.7373 | 0.8087 | | 0.0279 | 35.46 | 13120 | 0.5304 | 0.7383 | 0.8323 | 0.8987 | 0.9488 | 0.8143 | 0.8835 | 0.5400 | 0.9128 | 0.8778 | 0.8487 | 0.8822 | 0.6846 | 0.7745 | 0.4498 | 0.8317 | 0.7368 | 0.8083 | | 0.1464 | 35.51 | 13140 | 0.5380 | 0.7386 | 0.8325 | 0.8988 | 0.9484 | 0.8118 | 0.8862 | 0.5444 | 0.9134 | 0.8745 | 0.8492 | 0.8822 | 0.6846 | 0.7744 | 0.4502 | 0.8317 | 0.7390 | 0.8083 | | 0.1304 | 35.57 | 13160 | 0.5651 | 0.7343 | 0.8276 | 0.8965 | 0.9485 | 0.8166 | 0.8783 | 0.5331 | 0.9191 | 0.8719 | 0.8256 | 0.8821 | 0.6848 | 0.7749 | 0.4454 | 0.8267 | 0.7367 | 0.7898 | | 0.0506 | 35.62 | 13180 | 0.5187 | 0.7387 | 0.8317 | 0.8991 | 0.9486 | 0.8162 | 0.8809 | 0.5396 | 0.9179 | 0.8721 | 0.8465 | 0.8828 | 0.6856 | 0.7776 | 0.4464 | 0.8330 | 0.7387 | 0.8066 | | 0.0938 | 35.68 | 13200 | 0.5224 | 0.7389 | 0.8313 | 0.8992 | 0.9499 | 0.8151 | 0.8771 | 0.5358 | 0.9158 | 0.8746 | 0.8506 | 0.8820 | 0.6855 | 0.7775 | 0.4463 | 0.8331 | 0.7390 | 0.8089 | | 0.2586 | 35.73 | 13220 | 0.5345 | 0.7379 | 0.8301 | 0.8987 | 0.9509 | 0.8169 | 0.8770 | 0.5339 | 0.9164 | 0.8699 | 0.8456 | 0.8814 | 0.6859 | 0.7771 | 0.4451 | 0.8326 | 0.7363 | 0.8068 | | 0.1677 | 35.78 | 13240 | 0.5339 | 0.7383 | 0.8304 | 0.8991 | 0.9481 | 0.8208 | 0.8798 | 0.5237 | 0.9187 | 0.8752 | 0.8465 | 0.8824 | 0.6863 | 0.7770 | 0.4457 | 0.8329 | 0.7369 | 0.8068 | | 0.157 | 35.84 | 13260 | 0.5235 | 0.7393 | 0.8318 | 0.8994 | 0.9498 | 0.8188 | 0.8780 | 0.5399 | 0.9176 | 0.8705 | 0.8484 | 0.8825 | 0.6867 | 0.7775 | 0.4484 | 0.8337 | 0.7382 | 0.8081 | | 0.1031 | 35.89 | 13280 | 0.5346 | 0.7388 | 0.8314 | 0.8991 | 0.9468 | 0.8162 | 0.8771 | 0.5510 | 0.9233 | 0.8562 | 0.8493 | 0.8825 | 0.6855 | 0.7763 | 0.4497 | 0.8334 | 0.7359 | 0.8084 | | 0.2119 | 35.95 | 13300 | 0.5344 | 0.7389 | 0.8326 | 0.8989 | 0.9485 | 0.8221 | 0.8743 | 0.5479 | 0.9173 | 0.8678 | 0.8503 | 0.8821 | 0.6858 | 0.7751 | 0.4515 | 0.8325 | 0.7366 | 0.8084 | | 0.1023 | 36.0 | 13320 | 0.5377 | 0.7389 | 0.8322 | 0.8990 | 0.9505 | 0.8065 | 0.8784 | 0.5544 | 0.9146 | 0.8732 | 0.8479 | 0.8825 | 0.6854 | 0.7758 | 0.4504 | 0.8328 | 0.7372 | 0.8081 | | 0.0772 | 36.05 | 13340 | 0.5341 | 0.7392 | 0.8323 | 0.8991 | 0.9487 | 0.8201 | 0.8730 | 0.5439 | 0.9173 | 0.8728 | 0.8502 | 0.8822 | 0.6864 | 0.7754 | 0.4517 | 0.8328 | 0.7369 | 0.8089 | | 0.0549 | 36.11 | 13360 | 0.5335 | 0.7384 | 0.8282 | 0.8992 | 0.9478 | 0.8071 | 0.8752 | 0.5212 | 0.9212 | 0.8759 | 0.8493 | 0.8828 | 0.6858 | 0.7758 | 0.4472 | 0.8322 | 0.7366 | 0.8082 | | 0.1126 | 36.16 | 13380 | 0.5259 | 0.7392 | 0.8319 | 0.8992 | 0.9479 | 0.8153 | 0.8780 | 0.5423 | 0.9190 | 0.8719 | 0.8487 | 0.8829 | 0.6857 | 0.7768 | 0.4511 | 0.8329 | 0.7366 | 0.8088 | | 0.1208 | 36.22 | 13400 | 0.5402 | 
0.7391 | 0.8330 | 0.8990 | 0.9484 | 0.8146 | 0.8763 | 0.5489 | 0.9158 | 0.8782 | 0.8486 | 0.8828 | 0.6853 | 0.7763 | 0.4523 | 0.8325 | 0.7362 | 0.8086 | | 0.1396 | 36.27 | 13420 | 0.5348 | 0.7392 | 0.8334 | 0.8991 | 0.9476 | 0.8169 | 0.8833 | 0.5490 | 0.9168 | 0.8726 | 0.8474 | 0.8828 | 0.6852 | 0.7759 | 0.4519 | 0.8331 | 0.7373 | 0.8081 | | 0.0582 | 36.32 | 13440 | 0.5360 | 0.7391 | 0.8320 | 0.8992 | 0.9479 | 0.8166 | 0.8830 | 0.5426 | 0.9190 | 0.8660 | 0.8486 | 0.8826 | 0.6855 | 0.7758 | 0.4505 | 0.8333 | 0.7373 | 0.8088 | | 0.1676 | 36.38 | 13460 | 0.5301 | 0.7386 | 0.8291 | 0.8992 | 0.9473 | 0.7953 | 0.8841 | 0.5340 | 0.9191 | 0.8712 | 0.8530 | 0.8826 | 0.6826 | 0.7759 | 0.4495 | 0.8329 | 0.7380 | 0.8086 | | 0.1212 | 36.43 | 13480 | 0.5301 | 0.7383 | 0.8296 | 0.8990 | 0.9477 | 0.7980 | 0.8845 | 0.5346 | 0.9176 | 0.8739 | 0.8505 | 0.8826 | 0.6826 | 0.7755 | 0.4498 | 0.8322 | 0.7368 | 0.8083 | | 0.0525 | 36.49 | 13500 | 0.5273 | 0.7392 | 0.8336 | 0.8991 | 0.9486 | 0.8169 | 0.8764 | 0.5528 | 0.9157 | 0.8750 | 0.8497 | 0.8827 | 0.6850 | 0.7774 | 0.4514 | 0.8331 | 0.7372 | 0.8079 | | 0.0755 | 36.54 | 13520 | 0.5342 | 0.7383 | 0.8344 | 0.8986 | 0.9466 | 0.8172 | 0.8819 | 0.5625 | 0.9163 | 0.8691 | 0.8473 | 0.8827 | 0.6842 | 0.7765 | 0.4496 | 0.8322 | 0.7355 | 0.8076 | | 0.1689 | 36.59 | 13540 | 0.5405 | 0.7392 | 0.8339 | 0.8991 | 0.9481 | 0.8190 | 0.8800 | 0.5542 | 0.9162 | 0.8707 | 0.8494 | 0.8827 | 0.6853 | 0.7776 | 0.4503 | 0.8329 | 0.7374 | 0.8084 | | 0.0651 | 36.65 | 13560 | 0.5351 | 0.7391 | 0.8331 | 0.8990 | 0.9487 | 0.8117 | 0.8804 | 0.5501 | 0.9151 | 0.8788 | 0.8468 | 0.8831 | 0.6848 | 0.7772 | 0.4518 | 0.8325 | 0.7365 | 0.8074 | | 0.1972 | 36.7 | 13580 | 0.5330 | 0.7388 | 0.8338 | 0.8987 | 0.9483 | 0.8113 | 0.8816 | 0.5547 | 0.9120 | 0.8769 | 0.8515 | 0.8824 | 0.6842 | 0.7762 | 0.4519 | 0.8314 | 0.7375 | 0.8078 | | 0.08 | 36.76 | 13600 | 0.5254 | 0.7389 | 0.8309 | 0.8991 | 0.9489 | 0.8133 | 0.8782 | 0.5328 | 0.9159 | 0.8753 | 0.8521 | 0.8823 | 0.6851 | 0.7762 | 0.4501 | 0.8323 | 0.7379 | 0.8081 | | 0.0704 | 36.81 | 13620 | 0.5285 | 0.7386 | 0.8331 | 0.8987 | 0.9479 | 0.8222 | 0.8704 | 0.5554 | 0.9178 | 0.8666 | 0.8514 | 0.8824 | 0.6850 | 0.7754 | 0.4498 | 0.8321 | 0.7367 | 0.8087 | | 0.0639 | 36.86 | 13640 | 0.5181 | 0.7388 | 0.8338 | 0.8987 | 0.9500 | 0.8044 | 0.8804 | 0.5588 | 0.9095 | 0.8815 | 0.8518 | 0.8828 | 0.6842 | 0.7763 | 0.4511 | 0.8315 | 0.7376 | 0.8082 | | 0.096 | 36.92 | 13660 | 0.5198 | 0.7390 | 0.8339 | 0.8988 | 0.9499 | 0.8015 | 0.8803 | 0.5663 | 0.9112 | 0.8768 | 0.8516 | 0.8829 | 0.6840 | 0.7761 | 0.4514 | 0.8320 | 0.7374 | 0.8093 | | 0.0976 | 36.97 | 13680 | 0.5320 | 0.7393 | 0.8320 | 0.8992 | 0.9502 | 0.8123 | 0.8803 | 0.5488 | 0.9160 | 0.8656 | 0.8505 | 0.8827 | 0.6859 | 0.7769 | 0.4505 | 0.8326 | 0.7370 | 0.8092 | | 0.0745 | 37.03 | 13700 | 0.5309 | 0.7392 | 0.8315 | 0.8993 | 0.9480 | 0.8192 | 0.8776 | 0.5325 | 0.9182 | 0.8755 | 0.8492 | 0.8830 | 0.6868 | 0.7766 | 0.4500 | 0.8324 | 0.7367 | 0.8089 | | 0.1075 | 37.08 | 13720 | 0.5261 | 0.7395 | 0.8328 | 0.8992 | 0.9468 | 0.8208 | 0.8817 | 0.5315 | 0.9157 | 0.8809 | 0.8518 | 0.8830 | 0.6869 | 0.7768 | 0.4516 | 0.8322 | 0.7368 | 0.8089 | | 0.0482 | 37.14 | 13740 | 0.5381 | 0.7394 | 0.8348 | 0.8990 | 0.9473 | 0.8253 | 0.8810 | 0.5503 | 0.9148 | 0.8746 | 0.8500 | 0.8830 | 0.6865 | 0.7763 | 0.4525 | 0.8322 | 0.7364 | 0.8091 | | 0.0523 | 37.19 | 13760 | 0.5222 | 0.7394 | 0.8322 | 0.8993 | 0.9494 | 0.8234 | 0.8718 | 0.5390 | 0.9178 | 0.8743 | 0.8496 | 0.8830 | 0.6863 | 0.7769 | 0.4512 | 0.8331 | 0.7364 | 0.8089 | | 
0.0598 | 37.24 | 13780 | 0.5314 | 0.7393 | 0.8336 | 0.8991 | 0.9492 | 0.8208 | 0.8777 | 0.5523 | 0.9164 | 0.8720 | 0.8466 | 0.8831 | 0.6859 | 0.7773 | 0.4519 | 0.8330 | 0.7355 | 0.8081 | | 0.1045 | 37.3 | 13800 | 0.5470 | 0.7396 | 0.8344 | 0.8991 | 0.9484 | 0.8191 | 0.8745 | 0.5594 | 0.9158 | 0.8722 | 0.8515 | 0.8827 | 0.6856 | 0.7772 | 0.4527 | 0.8330 | 0.7371 | 0.8093 | | 0.1168 | 37.35 | 13820 | 0.5264 | 0.7396 | 0.8333 | 0.8993 | 0.9491 | 0.8137 | 0.8834 | 0.5464 | 0.9141 | 0.8768 | 0.8496 | 0.8830 | 0.6862 | 0.7774 | 0.4521 | 0.8330 | 0.7371 | 0.8087 | | 0.1679 | 37.41 | 13840 | 0.5290 | 0.7392 | 0.8322 | 0.8993 | 0.9479 | 0.8214 | 0.8827 | 0.5310 | 0.9163 | 0.8756 | 0.8505 | 0.8825 | 0.6863 | 0.7772 | 0.4490 | 0.8330 | 0.7373 | 0.8089 | | 0.0859 | 37.46 | 13860 | 0.5377 | 0.7391 | 0.8321 | 0.8991 | 0.9487 | 0.8111 | 0.8803 | 0.5471 | 0.9166 | 0.8707 | 0.8501 | 0.8824 | 0.6850 | 0.7768 | 0.4513 | 0.8327 | 0.7366 | 0.8092 | | 0.1674 | 37.51 | 13880 | 0.5456 | 0.7385 | 0.8318 | 0.8988 | 0.9493 | 0.8154 | 0.8834 | 0.5438 | 0.9161 | 0.8680 | 0.8464 | 0.8822 | 0.6851 | 0.7765 | 0.4504 | 0.8324 | 0.7350 | 0.8080 | | 0.0637 | 37.57 | 13900 | 0.5408 | 0.7391 | 0.8341 | 0.8987 | 0.9465 | 0.8171 | 0.8833 | 0.5540 | 0.9149 | 0.8715 | 0.8513 | 0.8822 | 0.6850 | 0.7761 | 0.4536 | 0.8317 | 0.7361 | 0.8088 | | 0.1859 | 37.62 | 13920 | 0.5304 | 0.7395 | 0.8323 | 0.8992 | 0.9481 | 0.8129 | 0.8801 | 0.5512 | 0.9184 | 0.8647 | 0.8511 | 0.8825 | 0.6854 | 0.7775 | 0.4522 | 0.8326 | 0.7369 | 0.8097 | | 0.0688 | 37.68 | 13940 | 0.5176 | 0.7395 | 0.8334 | 0.8992 | 0.9488 | 0.8211 | 0.8815 | 0.5430 | 0.9147 | 0.8750 | 0.8497 | 0.8826 | 0.6854 | 0.7779 | 0.4523 | 0.8328 | 0.7365 | 0.8088 | | 0.5325 | 37.73 | 13960 | 0.5307 | 0.7398 | 0.8349 | 0.8992 | 0.9480 | 0.8216 | 0.8806 | 0.5543 | 0.9143 | 0.8754 | 0.8504 | 0.8828 | 0.6854 | 0.7773 | 0.4544 | 0.8327 | 0.7372 | 0.8089 | | 0.0912 | 37.78 | 13980 | 0.5272 | 0.7400 | 0.8332 | 0.8994 | 0.9478 | 0.8166 | 0.8790 | 0.5486 | 0.9175 | 0.8715 | 0.8517 | 0.8829 | 0.6858 | 0.7772 | 0.4541 | 0.8333 | 0.7377 | 0.8091 | | 0.1611 | 37.84 | 14000 | 0.5195 | 0.7398 | 0.8328 | 0.8995 | 0.9484 | 0.8215 | 0.8740 | 0.5455 | 0.9189 | 0.8695 | 0.8517 | 0.8827 | 0.6858 | 0.7777 | 0.4520 | 0.8338 | 0.7378 | 0.8089 | | 0.0972 | 37.89 | 14020 | 0.5306 | 0.7393 | 0.8330 | 0.8992 | 0.9492 | 0.8177 | 0.8777 | 0.5516 | 0.9168 | 0.8687 | 0.8490 | 0.8825 | 0.6854 | 0.7767 | 0.4524 | 0.8333 | 0.7365 | 0.8085 | | 0.078 | 37.95 | 14040 | 0.5291 | 0.7394 | 0.8328 | 0.8992 | 0.9487 | 0.8202 | 0.8806 | 0.5436 | 0.9167 | 0.8696 | 0.8499 | 0.8824 | 0.6856 | 0.7767 | 0.4523 | 0.8333 | 0.7369 | 0.8086 | | 0.1464 | 38.0 | 14060 | 0.5275 | 0.7391 | 0.8328 | 0.8990 | 0.9483 | 0.8008 | 0.8802 | 0.5561 | 0.9143 | 0.8793 | 0.8508 | 0.8827 | 0.6833 | 0.7760 | 0.4536 | 0.8327 | 0.7369 | 0.8083 | | 0.1374 | 38.05 | 14080 | 0.5259 | 0.7397 | 0.8354 | 0.8991 | 0.9485 | 0.8217 | 0.8781 | 0.5607 | 0.9141 | 0.8740 | 0.8506 | 0.8827 | 0.6857 | 0.7767 | 0.4535 | 0.8333 | 0.7377 | 0.8085 | | 0.1035 | 38.11 | 14100 | 0.5415 | 0.7395 | 0.8348 | 0.8990 | 0.9471 | 0.8227 | 0.8788 | 0.5588 | 0.9169 | 0.8717 | 0.8479 | 0.8828 | 0.6856 | 0.7759 | 0.4542 | 0.8331 | 0.7366 | 0.8080 | | 0.098 | 38.16 | 14120 | 0.5428 | 0.7385 | 0.8286 | 0.8995 | 0.9477 | 0.8221 | 0.8758 | 0.5146 | 0.9234 | 0.8672 | 0.8496 | 0.8826 | 0.6863 | 0.7767 | 0.4448 | 0.8334 | 0.7371 | 0.8086 | | 0.573 | 38.22 | 14140 | 0.5271 | 0.7394 | 0.8321 | 0.8993 | 0.9489 | 0.8180 | 0.8740 | 0.5461 | 0.9194 | 0.8697 | 0.8484 | 0.8827 | 0.6859 | 0.7765 | 0.4521 
| 0.8337 | 0.7367 | 0.8083 | | 0.0659 | 38.27 | 14160 | 0.5290 | 0.7400 | 0.8331 | 0.8996 | 0.9474 | 0.8144 | 0.8809 | 0.5434 | 0.9176 | 0.8781 | 0.8496 | 0.8831 | 0.6863 | 0.7773 | 0.4531 | 0.8338 | 0.7382 | 0.8082 | | 0.0912 | 38.32 | 14180 | 0.5319 | 0.7397 | 0.8336 | 0.8993 | 0.9486 | 0.8220 | 0.8801 | 0.5421 | 0.9152 | 0.8777 | 0.8494 | 0.8825 | 0.6864 | 0.7771 | 0.4524 | 0.8334 | 0.7378 | 0.8085 | | 0.156 | 38.38 | 14200 | 0.5256 | 0.7398 | 0.8346 | 0.8992 | 0.9482 | 0.8143 | 0.8838 | 0.5568 | 0.9138 | 0.8746 | 0.8508 | 0.8825 | 0.6852 | 0.7770 | 0.4542 | 0.8330 | 0.7379 | 0.8089 | | 0.0975 | 38.43 | 14220 | 0.5490 | 0.7387 | 0.8318 | 0.8988 | 0.9493 | 0.8094 | 0.8773 | 0.5541 | 0.9176 | 0.8687 | 0.8465 | 0.8824 | 0.6844 | 0.7761 | 0.4518 | 0.8326 | 0.7352 | 0.8081 | | 0.1479 | 38.49 | 14240 | 0.5313 | 0.7394 | 0.8334 | 0.8990 | 0.9478 | 0.8058 | 0.8829 | 0.5574 | 0.9147 | 0.8737 | 0.8518 | 0.8825 | 0.6843 | 0.7765 | 0.4535 | 0.8327 | 0.7371 | 0.8090 | | 0.1092 | 38.54 | 14260 | 0.5362 | 0.7389 | 0.8315 | 0.8989 | 0.9498 | 0.8128 | 0.8744 | 0.5438 | 0.9160 | 0.8750 | 0.8490 | 0.8820 | 0.6853 | 0.7762 | 0.4517 | 0.8327 | 0.7356 | 0.8087 | | 0.046 | 38.59 | 14280 | 0.5155 | 0.7400 | 0.8343 | 0.8995 | 0.9476 | 0.8128 | 0.8830 | 0.5597 | 0.9171 | 0.8680 | 0.8516 | 0.8830 | 0.6858 | 0.7783 | 0.4517 | 0.8339 | 0.7379 | 0.8093 | | 0.0599 | 38.65 | 14300 | 0.5206 | 0.7401 | 0.8333 | 0.8995 | 0.9484 | 0.8130 | 0.8800 | 0.5507 | 0.9167 | 0.8727 | 0.8519 | 0.8826 | 0.6860 | 0.7782 | 0.4524 | 0.8339 | 0.7383 | 0.8093 | | 0.1329 | 38.7 | 14320 | 0.5449 | 0.7397 | 0.8328 | 0.8995 | 0.9497 | 0.8193 | 0.8833 | 0.5422 | 0.9164 | 0.8713 | 0.8472 | 0.8825 | 0.6865 | 0.7781 | 0.4517 | 0.8341 | 0.7371 | 0.8083 | | 0.0821 | 38.76 | 14340 | 0.5242 | 0.7399 | 0.8315 | 0.8998 | 0.9495 | 0.8197 | 0.8838 | 0.5410 | 0.9206 | 0.8559 | 0.8502 | 0.8825 | 0.6868 | 0.7789 | 0.4494 | 0.8346 | 0.7375 | 0.8094 | | 0.1622 | 38.81 | 14360 | 0.5304 | 0.7400 | 0.8328 | 0.8996 | 0.9486 | 0.8142 | 0.8838 | 0.5482 | 0.9178 | 0.8672 | 0.8502 | 0.8827 | 0.6859 | 0.7784 | 0.4517 | 0.8340 | 0.7379 | 0.8093 | | 0.6038 | 38.86 | 14380 | 0.5273 | 0.7400 | 0.8340 | 0.8995 | 0.9486 | 0.8158 | 0.8814 | 0.5519 | 0.9163 | 0.8750 | 0.8487 | 0.8831 | 0.6863 | 0.7782 | 0.4523 | 0.8339 | 0.7376 | 0.8088 | | 0.0882 | 38.92 | 14400 | 0.5216 | 0.7399 | 0.8334 | 0.8994 | 0.9478 | 0.8081 | 0.8808 | 0.5564 | 0.9170 | 0.8722 | 0.8512 | 0.8828 | 0.6850 | 0.7774 | 0.4531 | 0.8335 | 0.7378 | 0.8094 | | 0.0464 | 38.97 | 14420 | 0.5264 | 0.7395 | 0.8322 | 0.8994 | 0.9491 | 0.8145 | 0.8799 | 0.5493 | 0.9188 | 0.8645 | 0.8490 | 0.8825 | 0.6855 | 0.7774 | 0.4512 | 0.8337 | 0.7371 | 0.8091 | | 0.0572 | 39.03 | 14440 | 0.5370 | 0.7393 | 0.8323 | 0.8993 | 0.9487 | 0.8154 | 0.8862 | 0.5402 | 0.9167 | 0.8725 | 0.8464 | 0.8827 | 0.6857 | 0.7769 | 0.4512 | 0.8336 | 0.7370 | 0.8081 | | 0.0965 | 39.08 | 14460 | 0.5374 | 0.7396 | 0.8331 | 0.8993 | 0.9504 | 0.8203 | 0.8806 | 0.5438 | 0.9144 | 0.8749 | 0.8474 | 0.8822 | 0.6857 | 0.7778 | 0.4519 | 0.8338 | 0.7374 | 0.8082 | | 0.1418 | 39.14 | 14480 | 0.5310 | 0.7398 | 0.8338 | 0.8994 | 0.9493 | 0.8191 | 0.8786 | 0.5479 | 0.9149 | 0.8773 | 0.8493 | 0.8826 | 0.6858 | 0.7778 | 0.4522 | 0.8338 | 0.7375 | 0.8087 | | 0.0986 | 39.19 | 14500 | 0.5331 | 0.7397 | 0.8355 | 0.8992 | 0.9475 | 0.8126 | 0.8802 | 0.5685 | 0.9151 | 0.8751 | 0.8498 | 0.8830 | 0.6855 | 0.7775 | 0.4512 | 0.8336 | 0.7381 | 0.8091 | | 0.08 | 39.24 | 14520 | 0.5425 | 0.7398 | 0.8341 | 0.8992 | 0.9467 | 0.8071 | 0.8836 | 0.5573 | 0.9158 | 0.8774 | 0.8508 | 
0.8828 | 0.6848 | 0.7768 | 0.4536 | 0.8334 | 0.7378 | 0.8091 | | 0.0534 | 39.3 | 14540 | 0.5327 | 0.7402 | 0.8346 | 0.8997 | 0.9478 | 0.8183 | 0.8865 | 0.5545 | 0.9171 | 0.8686 | 0.8495 | 0.8830 | 0.6863 | 0.7785 | 0.4509 | 0.8345 | 0.7394 | 0.8090 | | 0.0997 | 39.35 | 14560 | 0.5406 | 0.7394 | 0.8342 | 0.8991 | 0.9460 | 0.8072 | 0.8766 | 0.5651 | 0.9178 | 0.8756 | 0.8511 | 0.8827 | 0.6840 | 0.7764 | 0.4531 | 0.8332 | 0.7375 | 0.8091 | | 0.0712 | 39.41 | 14580 | 0.5271 | 0.7398 | 0.8341 | 0.8993 | 0.9483 | 0.8204 | 0.8747 | 0.5573 | 0.9172 | 0.8699 | 0.8513 | 0.8823 | 0.6852 | 0.7768 | 0.4531 | 0.8336 | 0.7379 | 0.8094 | | 0.1451 | 39.46 | 14600 | 0.5329 | 0.7401 | 0.8326 | 0.8997 | 0.9479 | 0.8153 | 0.8851 | 0.5427 | 0.9182 | 0.8664 | 0.8522 | 0.8825 | 0.6858 | 0.7778 | 0.4521 | 0.8340 | 0.7391 | 0.8096 | | 0.1405 | 39.51 | 14620 | 0.5290 | 0.7400 | 0.8329 | 0.8996 | 0.9485 | 0.8205 | 0.8760 | 0.5456 | 0.9187 | 0.8705 | 0.8507 | 0.8826 | 0.6857 | 0.7777 | 0.4519 | 0.8342 | 0.7383 | 0.8093 | | 0.1406 | 39.57 | 14640 | 0.5368 | 0.7398 | 0.8319 | 0.8996 | 0.9474 | 0.8170 | 0.8819 | 0.5364 | 0.9193 | 0.8704 | 0.8508 | 0.8824 | 0.6862 | 0.7775 | 0.4515 | 0.8337 | 0.7383 | 0.8092 | | 0.0965 | 39.62 | 14660 | 0.5269 | 0.7403 | 0.8352 | 0.8996 | 0.9463 | 0.8195 | 0.8850 | 0.5576 | 0.9181 | 0.8699 | 0.8500 | 0.8832 | 0.6859 | 0.7780 | 0.4527 | 0.8342 | 0.7390 | 0.8091 | | 0.1628 | 39.68 | 14680 | 0.5228 | 0.7402 | 0.8352 | 0.8995 | 0.9471 | 0.8210 | 0.8811 | 0.5540 | 0.9159 | 0.8764 | 0.8507 | 0.8828 | 0.6860 | 0.7778 | 0.4537 | 0.8338 | 0.7385 | 0.8091 | | 0.099 | 39.73 | 14700 | 0.5323 | 0.7401 | 0.8349 | 0.8995 | 0.9481 | 0.8192 | 0.8818 | 0.5544 | 0.9155 | 0.8757 | 0.8494 | 0.8829 | 0.6859 | 0.7779 | 0.4528 | 0.8340 | 0.7386 | 0.8087 | | 0.0486 | 39.78 | 14720 | 0.5261 | 0.7403 | 0.8337 | 0.8997 | 0.9476 | 0.8216 | 0.8811 | 0.5423 | 0.9179 | 0.8762 | 0.8493 | 0.8830 | 0.6863 | 0.7783 | 0.4524 | 0.8343 | 0.7388 | 0.8089 | | 0.1896 | 39.84 | 14740 | 0.5485 | 0.7398 | 0.8335 | 0.8994 | 0.9495 | 0.8170 | 0.8746 | 0.5536 | 0.9167 | 0.8745 | 0.8487 | 0.8827 | 0.6857 | 0.7776 | 0.4523 | 0.8339 | 0.7378 | 0.8089 | | 0.1565 | 39.89 | 14760 | 0.5295 | 0.7399 | 0.8316 | 0.8997 | 0.9494 | 0.8081 | 0.8741 | 0.5539 | 0.9211 | 0.8637 | 0.8505 | 0.8828 | 0.6851 | 0.7775 | 0.4519 | 0.8345 | 0.7380 | 0.8097 | | 0.7861 | 39.95 | 14780 | 0.5195 | 0.7403 | 0.8344 | 0.8996 | 0.9476 | 0.8148 | 0.8820 | 0.5559 | 0.9170 | 0.8717 | 0.8516 | 0.8829 | 0.6859 | 0.7779 | 0.4532 | 0.8341 | 0.7392 | 0.8094 | | 0.0816 | 40.0 | 14800 | 0.5328 | 0.7402 | 0.8345 | 0.8995 | 0.9472 | 0.8139 | 0.8796 | 0.5587 | 0.9175 | 0.8744 | 0.8503 | 0.8830 | 0.6854 | 0.7777 | 0.4534 | 0.8340 | 0.7384 | 0.8091 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.2 - Datasets 2.17.1 - Tokenizers 0.15.1
fzzhang/mistral_gsm8k_s_prod_tuneS_unquantized_merged
fzzhang
2024-02-28T19:07:38Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:00:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liminerity/ultra0-half-the-layers
liminerity
2024-02-28T19:07:06Z
115
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/ultra0", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T19:05:38Z
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/ultra0
- liminerity/ultra0
---

# ultra0-half-the-layers

ultra0-half-the-layers is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/ultra0](https://huggingface.co/liminerity/ultra0)
* [liminerity/ultra0](https://huggingface.co/liminerity/ultra0)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: liminerity/ultra0
        layer_range: [0, 12]
      - model: liminerity/ultra0
        layer_range: [0, 12]
merge_method: slerp
base_model: liminerity/ultra0
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
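## 💻 Usage

The card stops at the merge configuration; the sketch below is an addition, not part of the original card. It assumes the merged checkpoint loads with the standard transformers text-generation pipeline and that it inherits a chat template from ultra0 (the repo is tagged "conversational").

```python
# Minimal generation sketch for the merged model (an assumption-laden
# illustration, not the author's documented usage).
import torch
from transformers import AutoTokenizer, pipeline

model_id = "liminerity/ultra0-half-the-layers"

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# Assumes the tokenizer ships a chat template; plain-string prompts work otherwise.
messages = [{"role": "user", "content": "What does a SLERP merge of a model with itself do?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```

Note the slerp config merges liminerity/ultra0 with itself over layers 0-12, so the practical effect is truncation to the first 12 layers — consistent with the "half-the-layers" name.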
tanatapanun/fine-tuned-BART-20-epochs-wanglab-512-output
tanatapanun
2024-02-28T18:58:50Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-28T18:21:39Z
---
base_model: bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BART-20-epochs-wanglab-512-output
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# fine-tuned-BART-20-epochs-wanglab-512-output

This model is a fine-tuned version of [bart-base](https://huggingface.co/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4945
- Rouge1: 0.0871
- Rouge2: 0.0196
- Rougel: 0.0787
- Rougelsum: 0.0787
- Bertscore F1: 0.837
- Bleurt Score: -1.873
- Gen Len: 20.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore F1 | Bleurt Score | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------------:|:------------:|:-------:|
| No log        | 1.0   | 301  | 1.5515          | 0.0293 | 0.0    | 0.0286 | 0.0282    | 0.7994       | -2.1159      | 11.68   |
| 4.8736        | 2.0   | 602  | 0.5364          | 0.0738 | 0.0183 | 0.0655 | 0.0653    | 0.8345       | -1.6735      | 20.0    |
| 4.8736        | 3.0   | 903  | 0.4811          | 0.071  | 0.0191 | 0.0677 | 0.0677    | 0.8359       | -1.7563      | 20.0    |
| 0.5377        | 4.0   | 1204 | 0.4621          | 0.0506 | 0.0125 | 0.0475 | 0.0474    | 0.8566       | -1.8275      | 8.0     |
| 0.4145        | 5.0   | 1505 | 0.4496          | 0.0231 | 0.0036 | 0.0237 | 0.0233    | 0.8458       | -1.4636      | 8.0     |
| 0.4145        | 6.0   | 1806 | 0.4455          | 0.078  | 0.0194 | 0.0714 | 0.071     | 0.8469       | -1.3815      | 20.0    |
| 0.336         | 7.0   | 2107 | 0.4416          | 0.0871 | 0.0196 | 0.0787 | 0.0787    | 0.837        | -1.873       | 20.0    |
| 0.336         | 8.0   | 2408 | 0.4440          | 0.0878 | 0.0195 | 0.0794 | 0.0791    | 0.8409       | -1.4561      | 20.0    |
| 0.2698        | 9.0   | 2709 | 0.4505          | 0.0231 | 0.0036 | 0.0237 | 0.0233    | 0.8458       | -1.4636      | 8.0     |
| 0.2225        | 10.0  | 3010 | 0.4546          | 0.0516 | 0.0101 | 0.0466 | 0.0463    | 0.8355       | -1.61        | 20.0    |
| 0.2225        | 11.0  | 3311 | 0.4627          | 0.0877 | 0.0194 | 0.0794 | 0.0791    | 0.8388       | -1.4342      | 20.0    |
| 0.1695        | 12.0  | 3612 | 0.4677          | 0.0704 | 0.0128 | 0.0628 | 0.0626    | 0.8218       | -1.8469      | 20.0    |
| 0.1695        | 13.0  | 3913 | 0.4716          | 0.0615 | 0.0193 | 0.056  | 0.0557    | 0.8342       | -1.5375      | 20.0    |
| 0.132         | 14.0  | 4214 | 0.4754          | 0.064  | 0.0196 | 0.0577 | 0.0576    | 0.839        | -1.8751      | 20.0    |
| 0.1122        | 15.0  | 4515 | 0.4837          | 0.0712 | 0.0175 | 0.0644 | 0.0642    | 0.8373       | -1.3366      | 20.0    |
| 0.1122        | 16.0  | 4816 | 0.4867          | 0.0817 | 0.01   | 0.0691 | 0.069     | 0.8425       | -1.4584      | 20.0    |
| 0.0893        | 17.0  | 5117 | 0.4904          | 0.0712 | 0.0175 | 0.0644 | 0.0642    | 0.8373       | -1.3366      | 20.0    |
| 0.0893        | 18.0  | 5418 | 0.4924          | 0.0871 | 0.0196 | 0.0787 | 0.0787    | 0.837        | -1.873       | 20.0    |
| 0.08          | 19.0  | 5719 | 0.4934          | 0.0871 | 0.0196 | 0.0787 | 0.0787    | 0.837        | -1.873       | 20.0    |
| 0.0706        | 20.0  | 6020 | 0.4945          | 0.0871 | 0.0196 | 0.0787 | 0.0787    | 0.837        | -1.873       | 20.0    |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
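The card gives no usage section; a minimal loading sketch follows. It is an addition, assuming only that the checkpoint loads with the standard transformers seq2seq classes (the repo is tagged text2text-generation) — the input text and generation settings are illustrative.

```python
# Illustrative loading/summarization sketch, not part of the original card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tanatapanun/fine-tuned-BART-20-epochs-wanglab-512-output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; the card does not describe the expected format.
text = "Doctor: What brings you in today? Patient: I've had a mild headache for two days."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# "512-output" in the model name suggests a 512-token output budget (an assumption).
summary_ids = model.generate(**inputs, max_new_tokens=512, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```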
sarak7/H10_228_769_v1
sarak7
2024-02-28T18:47:13Z
171
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:45:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Weni/ZeroShot-3.3.12-Mistral-7b-Multilanguage-3.2.0-merged
Weni
2024-02-28T18:43:06Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:32:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mohitpc10e/milky-way-galexy
mohitpc10e
2024-02-28T18:41:23Z
2
0
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-28T18:34:36Z
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---

### milky-way-galexy Dreambooth model trained by mohitpc10e following the "Build your own Gen AI model" session by NxtWave.

Project Submission Code: 23CYBER36

Sample pictures of this concept:

![0](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(3).jpeg)
![1](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(5).jpeg)
![2](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(01).jpeg)
![3](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg(6).jpeg)
![4](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(4).jpeg)
![5](https://huggingface.co/mohitpc10e/milky-way-galexy/resolve/main/sample_images/xzg_(2).jpeg)
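The card ships only sample images; a minimal generation sketch is added below. It assumes the repo loads with diffusers' StableDiffusionPipeline (per its tags), and it guesses that "xzg" — which appears in every sample-image filename — is the DreamBooth instance token; neither point is stated on the card.

```python
# Illustrative generation sketch, not part of the original card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mohitpc10e/milky-way-galexy",
    torch_dtype=torch.float16,
).to("cuda")

# "xzg" as the learned instance token is an assumption inferred from the
# sample filenames; adjust the prompt if the actual token differs.
image = pipe(
    "a photo of xzg milky way galaxy over a mountain ridge, night sky",
    num_inference_steps=30,
).images[0]
image.save("sample.png")
```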
numen-tech/Mixtral-8x7B-Instruct-v0.1-w4a16g128asym_1
numen-tech
2024-02-28T18:34:56Z
0
0
null
[ "arxiv:2308.13137", "license:apache-2.0", "region:us" ]
null
2024-02-28T18:31:38Z
---
license: apache-2.0
---

4-bit [OmniQuant](https://arxiv.org/abs/2308.13137) quantized version of [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). Note that the embedding and the MoE gate weights are not quantized in this version.
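The card does not spell out the scheme beyond citing OmniQuant. Reading the repo suffix `w4a16g128asym` as 4-bit weights, 16-bit activations, group size 128, asymmetric zero-points follows common naming conventions but is an inference, not documented. Under that reading, the basic per-group asymmetric grid looks like the sketch below; OmniQuant additionally *learns* the clipping ranges and equivalent transformations, which this deliberately omits.

```python
# Sketch of plain group-wise asymmetric 4-bit weight quantization — an
# illustration of the assumed "w4a16g128asym" layout, not OmniQuant itself.
import numpy as np

def quantize_group(w: np.ndarray, bits: int = 4):
    """Asymmetrically quantize one weight group to `bits` bits (min-max grid)."""
    qmax = 2**bits - 1                                # 15 for 4-bit
    scale = max((w.max() - w.min()) / qmax, 1e-8)     # per-group scale
    zero = np.round(-w.min() / scale)                 # per-group zero-point
    q = np.clip(np.round(w / scale) + zero, 0, qmax)
    return q.astype(np.uint8), scale, zero

def dequantize_group(q: np.ndarray, scale: float, zero: float) -> np.ndarray:
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
group = rng.normal(size=128).astype(np.float32)       # assumed group size 128
q, scale, zero = quantize_group(group)
print("max abs error:", np.abs(dequantize_group(q, scale, zero) - group).max())
```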
tomaszki/gemma-7
tomaszki
2024-02-28T18:27:10Z
119
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T18:19:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lollitor/Sequential9
Lollitor
2024-02-28T18:22:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-02-28T18:22:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Weni/ZeroShot-3.3.12-Mistral-7b-Multilanguage-3.2.0
Weni
2024-02-28T18:15:52Z
0
0
peft
[ "peft", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-02-28T16:56:04Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: ZeroShot-3.3.12-Mistral-7b-Multilanguage-3.2.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ZeroShot-3.3.12-Mistral-7b-Multilanguage-3.2.0 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5918 | 0.13 | 100 | 0.5496 | | 0.5301 | 0.27 | 200 | 0.5062 | | 0.4998 | 0.4 | 300 | 0.4911 | | 0.4977 | 0.53 | 400 | 0.4820 | | 0.477 | 0.67 | 500 | 0.4747 | | 0.4785 | 0.8 | 600 | 0.4698 | | 0.4699 | 0.94 | 700 | 0.4682 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
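Since the tags mark this repo as PEFT adapter weights rather than a full checkpoint, inference requires loading the adapter on top of the base model. A minimal sketch, assuming the adapter applies directly to the `base_model` listed in the card (the prompt text is purely illustrative, as the training prompt format is not documented):

```python
# Minimal sketch: load the LoRA adapter on top of the Mistral base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Weni/ZeroShot-3.3.12-Mistral-7b-Multilanguage-3.2.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights

# Illustrative zero-shot classification prompt (assumed, not from the card).
prompt = "[INST] Classify the intent of: 'Quero cancelar meu pedido' [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```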
mi-rei/Cthalpaca-llama2-7b-CT_III_efficient_full
mi-rei
2024-02-28T18:13:35Z
1
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T17:55:43Z
Accuracy: 0.690
F1 Score: 0.703
Accuracy for label 0: 0.648
Accuracy for label 1: 0.731

Classification Report:

| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0 | 0.70 | 0.65 | 0.67 | 548 |
| 1 | 0.68 | 0.73 | 0.70 | 554 |
| accuracy | | | 0.69 | 1102 |
| macro avg | 0.69 | 0.69 | 0.69 | 1102 |
| weighted avg | 0.69 | 0.69 | 0.69 | 1102 |

Confusion Matrix:

```
[[355 193   0]
 [149 405   0]
 [  0   0   0]]
```
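For reference, metrics in this form are typically produced with scikit-learn; below is a minimal sketch under the assumption that gold and predicted labels are available as plain integer lists. The arrays are illustrative placeholders, not the actual 1,102-example evaluation set:

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, f1_score)

# Illustrative gold and predicted labels; the real evaluation used 1102 examples.
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

print("Accuracy:", round(accuracy_score(y_true, y_pred), 3))
print("F1 Score:", round(f1_score(y_true, y_pred), 3))
print(classification_report(y_true, y_pred))
print("Confusion Matrix:")
print(confusion_matrix(y_true, y_pred))
```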
mi-rei/Cthalpaca-llama2-7b
mi-rei
2024-02-28T18:13:13Z
1
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-28T09:34:44Z
Trained on the first 50k rows of mi-rei/ClinicalTrial-gov-LLaMA.
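A minimal sketch of how such a subset can be selected, assuming the referenced repo is hosted as a Hugging Face dataset with a standard `train` split:

```python
from datasets import load_dataset

# Slice syntax selects only the first 50k rows without downloading extra splits.
ds = load_dataset("mi-rei/ClinicalTrial-gov-LLaMA", split="train[:50000]")
print(len(ds))  # expected: 50000
```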
Manish0611/phi2-code
Manish0611
2024-02-28T18:05:32Z
52
1
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-06T10:15:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
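The card's "How to Get Started" section is still a placeholder, so the following is only a hedged sketch for this record: standard causal-LM loading, with `trust_remote_code=True` inferred from the `custom_code` tag. The prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Manish0611/phi2-code"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Illustrative code-completion prompt, matching the repo name's "code" hint.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```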
accidentalgenuis99/sports-stats-questions-classifier
accidentalgenuis99
2024-02-28T17:54:12Z
106
1
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "autotrain", "en", "dataset:autotrain-nndq9-1xgjv/autotrain-data", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-02-28T17:33:49Z
---
tags:
- autotrain
- text-classification
widget:
- text: What are the latest updates on the NBA trade deadline?
datasets:
- autotrain-nndq9-1xgjv/autotrain-data
license: mit
language:
- en
pipeline_tag: text-classification
---

# Sports Stats Questions Classifier

Welcome to the Sports Stats Questions Classifier! This NLP-based text classification tool classifies sports-related questions into categories based on their content. Whether you're a sports enthusiast, journalist, or data analyst, this tool helps you quickly categorize and organize sports-related queries.

## Overview

This project uses natural language processing (NLP) techniques to understand and classify questions about sports statistics. By analyzing the text of a question, the classifier assigns it to one of several predefined categories, such as "scores," "players," "stats," "teams," "games," "standings," "schedules," "rosters," or "news."

## Features

- Text Classification: classifies sports stats questions into predefined categories.
- Fast and Efficient: uses state-of-the-art NLP models for quick and accurate classification.
- Easy Integration: can be integrated into applications, websites, or chatbots for seamless user interaction.
- Customizable: easily extend or modify the categories and training data to suit your specific needs.
- User-Friendly: simple and intuitive interface for both developers and end users.

## Contributing

Contributions are welcome! If you have any suggestions, feature requests, or bug reports, please open an issue or submit a pull request on GitHub.

## License

MIT

## Contact

Feel free to reach out to us with any questions, feedback, or collaboration opportunities. Happy classifying! 🏀🏈⚽️

# Model Trained Using AutoTrain

- Problem type: Text Classification

## Validation Metrics

- loss: 1.0594602823257446
- f1_macro: 0.703125
- f1_micro: 0.8518518518518519
- f1_weighted: 0.7916666666666666
- precision_macro: 0.6722222222222223
- precision_micro: 0.8518518518518519
- precision_weighted: 0.7497942386831277
- recall_macro: 0.75
- recall_micro: 0.8518518518518519
- recall_weighted: 0.8518518518518519
- accuracy: 0.8518518518518519
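A minimal usage sketch, assuming the model loads with the standard text-classification pipeline. The example question is the widget text from the card; the returned label names depend on the training data:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="accidentalgenuis99/sports-stats-questions-classifier",
)
# Returns a list of {"label": ..., "score": ...} dicts.
print(classifier("What are the latest updates on the NBA trade deadline?"))
```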