Dataset columns (type and observed range); each record below lists these fields in the same order, with the full `card` markdown inline:

| Column | Type | Observed range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-11 06:30:11 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-11 06:29:58 |
| card | string | length 11 to 1.01M |
baltop/cdp_500
|
baltop
| 2024-01-20T02:51:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-20T02:51:34Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
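The card above is still the blank template, so it ships no usage snippet. As a stand-in, here is a minimal, unofficial sketch of loading this adapter with `peft`; the only identifiers taken from the card metadata are the base model (`mistralai/Mistral-7B-Instruct-v0.1`) and the adapter repo (`baltop/cdp_500`), everything else is an assumption.
```python
# Illustrative sketch only -- not provided by the adapter's author.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"  # base model from the card metadata
adapter_id = "baltop/cdp_500"                   # this adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the adapter weights

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(base_model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```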
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
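For readers who want to reproduce this setup, the list above maps onto `transformers`' `BitsAndBytesConfig` roughly as sketched below; the surrounding loading code is an illustration, not the author's actual training script.
```python
# Illustrative reconstruction of the quantization config listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",  # base model from the card metadata
    quantization_config=bnb_config,
    device_map="auto",
)
```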
### Framework versions
- PEFT 0.7.0
|
baltop/cdp_400
|
baltop
| 2024-01-20T02:51:19Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-20T02:51:00Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0
|
baltop/cdp_300
|
baltop
| 2024-01-20T02:49:21Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-20T02:49:02Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0
|
Chen311/Model_1.5
|
Chen311
| 2024-01-20T02:09:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-29T01:07:15Z |
---
license: creativeml-openrail-m
---
|
Vivacem/DeepSeek-67B-MMIQC
|
Vivacem
| 2024-01-20T01:56:09Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2401.09003",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T05:52:12Z |
---
license: apache-2.0
---
DeepSeek-67B-MMIQC is obtained by fine-tuning [DeepSeek-67B](https://huggingface.co/deepseek-ai/deepseek-llm-67b-base) on [MMIQC](https://huggingface.co/datasets/Vivacem/MMIQC).
It achieves 41.0% test accuracy on MATH.
See our [paper](https://arxiv.org/abs/2401.09003) for details.
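The card stops here, so there is no usage snippet; the following is a minimal, assumed `transformers` sketch (the question/answer prompt format is a guess, and a 67B model needs several high-memory GPUs or offloading).
```python
# Illustrative sketch only -- prompt format and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vivacem/DeepSeek-67B-MMIQC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Question: What is 12 * 13?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```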
|
Tillmandev/LunarLander10m
|
Tillmandev
| 2024-01-20T01:52:34Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-16T12:04:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.15 +/- 17.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Checkpoint filename is an assumption -- check this repo's Files tab for the actual name.
checkpoint = load_from_hub(repo_id="Tillmandev/LunarLander10m", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
zelihami/nlpfinalbert0
|
zelihami
| 2024-01-20T01:22:09Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-128k-uncased",
"base_model:finetune:dbmdz/bert-base-turkish-128k-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T00:33:12Z |
---
license: mit
base_model: dbmdz/bert-base-turkish-128k-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: nlpfinalbert0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlpfinalbert0
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3191
- Accuracy: 0.88
- F1: 0.8349
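The card does not show how to run the model; a minimal sketch with the `transformers` text-classification pipeline might look like the following (the Turkish sentence is an arbitrary example, and the label set depends on the unreported training data).
```python
# Illustrative sketch only -- not part of the original card.
from transformers import pipeline

classifier = pipeline("text-classification", model="zelihami/nlpfinalbert0")
print(classifier("Bu film gerçekten harikaydı."))  # arbitrary Turkish example input
```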
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
UAEpro/whisper-small-ar-2
|
UAEpro
| 2024-01-20T00:42:47Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:mozilla-foundation/common_voice_16_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-15T20:58:48Z |
---
language:
- ar
license: apache-2.0
base_model: uaepro/whisper-small-ar-2
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small ar - majed test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: ar
split: test
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 168.22177271055537
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small ar - majed test
This model is a fine-tuned version of [uaepro/whisper-small-ar-2](https://huggingface.co/uaepro/whisper-small-ar-2) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3392
- Wer: 168.2218
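No usage example is given in the card; a minimal, assumed sketch with the `transformers` speech-recognition pipeline is shown below (the audio path is a placeholder for a local Arabic recording).
```python
# Illustrative sketch only -- not part of the original card.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="UAEpro/whisper-small-ar-2")
result = asr("path/to/arabic_sample.wav")  # placeholder path to a local audio file
print(result["text"])
```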
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1459 | 0.41 | 1000 | 0.3714 | 182.4752 |
| 0.1378 | 0.82 | 2000 | 0.3486 | 177.9993 |
| 0.0738 | 1.24 | 3000 | 0.3513 | 184.2939 |
| 0.0855 | 1.65 | 4000 | 0.3392 | 168.2218 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jdang/openhermes-mistral-dpo-gptq
|
jdang
| 2024-01-20T00:35:18Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-12T17:05:27Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6104
- Rewards/chosen: -0.0458
- Rewards/rejected: -0.4535
- Rewards/accuracies: 0.6875
- Rewards/margins: 0.4077
- Logps/rejected: -390.3771
- Logps/chosen: -149.5892
- Logits/rejected: -1.3692
- Logits/chosen: -1.4352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
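As a rough illustration only, the hyperparameters listed above could be written as `transformers` `TrainingArguments` along these lines before being handed to `trl`'s DPO trainer; this is a reconstruction, not the author's actual script.
```python
# Illustrative reconstruction of the listed hyperparameters (output_dir is assumed).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=50,
    fp16=True,  # "Native AMP" mixed precision
)
```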
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6865 | 0.01 | 10 | 0.6792 | -0.0093 | -0.0078 | 0.6875 | -0.0015 | -385.9200 | -149.2238 | -1.3698 | -1.4189 |
| 0.6882 | 0.01 | 20 | 0.6660 | -0.0137 | -0.0526 | 0.625 | 0.0389 | -386.3681 | -149.2680 | -1.3729 | -1.4240 |
| 0.6391 | 0.01 | 30 | 0.6446 | 0.0000 | -0.1131 | 0.625 | 0.1131 | -386.9731 | -149.1310 | -1.3737 | -1.4292 |
| 0.639 | 0.02 | 40 | 0.6271 | -0.0337 | -0.2758 | 0.6875 | 0.2421 | -388.6000 | -149.4686 | -1.3729 | -1.4342 |
| 0.6533 | 0.03 | 50 | 0.6104 | -0.0458 | -0.4535 | 0.6875 | 0.4077 | -390.3771 | -149.5892 | -1.3692 | -1.4352 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
arielogg/t5-small-finetuned-en-to-fr
|
arielogg
| 2024-01-20T00:29:44Z | 45 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-19T22:16:43Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: arielogg/t5-small-finetuned-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arielogg/t5-small-finetuned-en-to-fr
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1390
- Validation Loss: 0.9577
- Train Bleu: 35.5719
- Train Gen Len: 29.4217
- Epoch: 0
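The card omits a usage example; since this checkpoint was trained with Keras/TensorFlow, a minimal assumed sketch with the TF classes could look like this (the English sentence is arbitrary).
```python
# Illustrative sketch only -- not part of the original card.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo_id = "arielogg/t5-small-finetuned-en-to-fr"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("translate English to French: The house is wonderful.", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```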
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 1.1390 | 0.9577 | 35.5719 | 29.4217 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/WinterGoddess-1.4x-70B-L2-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-20T00:24:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T00:08:10Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and then tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? -> I used the DARE method to merge the 3 models, then trained an Axolotl qLoRA on top, and then used lora-merge, copying the files of the base merged model because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
afrideva/tinyllama-python-GGUF
|
afrideva
| 2024-01-20T00:07:17Z | 59 | 0 | null |
[
"gguf",
"code",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"base_model:rahuldshetty/tinyllama-python",
"base_model:quantized:rahuldshetty/tinyllama-python",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-20T00:03:29Z |
---
base_model: rahuldshetty/tinyllama-python
datasets:
- iamtarun/python_code_instructions_18k_alpaca
inference: false
language:
- en
license: apache-2.0
model_creator: rahuldshetty
model_name: tinyllama-python
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- code
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '### Instruction:
Write a function to find square of a number.
### Response:'
- text: '### Instruction:
Write a function to calculate factorial.
### Response:'
- text: '### Instruction:
Write a function to check whether a number is prime.
### Response:'
---
# rahuldshetty/tinyllama-python-GGUF
Quantized GGUF model files for [tinyllama-python](https://huggingface.co/rahuldshetty/tinyllama-python) from [rahuldshetty](https://huggingface.co/rahuldshetty)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-python.fp16.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-python.q2_k.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q2_k.gguf) | q2_k | 432.13 MB |
| [tinyllama-python.q3_k_m.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q3_k_m.gguf) | q3_k_m | 548.40 MB |
| [tinyllama-python.q4_k_m.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-python.q5_k_m.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-python.q6_k.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-python.q8_0.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q8_0.gguf) | q8_0 | 1.17 GB |
## Original Model Card:
# rahuldshetty/tinyllama-python-gguf
- Base model: [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit)
- Dataset: [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)
- Training Script: [unslothai: Alpaca + TinyLlama + RoPE Scaling full example.ipynb](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)
## Prompt Format
```
### Instruction:
{instruction}
### Response:
```
## Example
```
### Instruction:
Write a function to find cube of a number.
### Response:
```
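The card does not say how to run the GGUF files; one option (an assumption, not the quantizer's recommendation) is `llama-cpp-python`, using a file name from the table above together with the prompt format shown.
```python
# Illustrative sketch only -- download the chosen .gguf file from this repo first.
from llama_cpp import Llama

llm = Llama(model_path="tinyllama-python.q4_k_m.gguf", n_ctx=2048)

prompt = "### Instruction:\nWrite a function to find cube of a number.\n\n### Response:\n"
out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```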
|
segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF
|
segolilylabs
| 2024-01-20T00:01:19Z | 3,243 | 16 | null |
[
"gguf",
"cybersecurity",
"cyber security",
"hacking",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-12T02:13:04Z |
---
license: apache-2.0
tags:
- cybersecurity
- cyber security
- hacking
language:
- en
---
My attempt at making GGUF versions of <a href="https://huggingface.co/segolilylabs/Lily-Cybersecurity-7B-v0.2">segolilylabs/Lily-Cybersecurity-7B-v0.2</a>
|
afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF
|
afrideva
| 2024-01-19T23:54:54Z | 6 | 0 | null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"base_model:davanstrien/TinyLlama-haiku-dpo-v.0.1",
"base_model:quantized:davanstrien/TinyLlama-haiku-dpo-v.0.1",
"region:us",
"conversational"
] |
text-generation
| 2024-01-19T23:42:50Z |
---
base_model: davanstrien/TinyLlama-haiku-dpo-v.0.1
inference: false
model_creator: davanstrien
model_name: TinyLlama-haiku-dpo-v.0.1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# davanstrien/TinyLlama-haiku-dpo-v.0.1-GGUF
Quantized GGUF model files for [TinyLlama-haiku-dpo-v.0.1](https://huggingface.co/davanstrien/TinyLlama-haiku-dpo-v.0.1) from [davanstrien](https://huggingface.co/davanstrien)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-haiku-dpo-v.0.1.fp16.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-haiku-dpo-v.0.1.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q2_k.gguf) | q2_k | 432.13 MB |
| [tinyllama-haiku-dpo-v.0.1.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q3_k_m.gguf) | q3_k_m | 548.40 MB |
| [tinyllama-haiku-dpo-v.0.1.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-haiku-dpo-v.0.1.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-haiku-dpo-v.0.1.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-haiku-dpo-v.0.1.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-haiku-dpo-v.0.1-GGUF/resolve/main/tinyllama-haiku-dpo-v.0.1.q8_0.gguf) | q8_0 | 1.17 GB |
## Original Model Card:
|
Aneeth/zephyr_7k
|
Aneeth
| 2024-01-19T23:51:26Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-beta-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-beta-GPTQ",
"license:mit",
"region:us"
] | null | 2024-01-17T11:53:37Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-beta-GPTQ
model-index:
- name: zephyr_7k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr_7k
This model is a fine-tuned version of [TheBloke/zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3761 | 0.23 | 100 | 1.1737 |
| 0.8147 | 0.46 | 200 | 0.4469 |
| 0.3427 | 0.68 | 300 | 0.2869 |
| 0.2726 | 0.91 | 400 | 0.2630 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ
|
MaziyarPanahi
| 2024-01-19T23:48:46Z | 376 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space",
"conversational",
"base_model:MediaTek-Research/Breeze-7B-Instruct-v0_1",
"base_model:finetune:MediaTek-Research/Breeze-7B-Instruct-v0_1"
] |
text-generation
| 2024-01-19T23:46:33Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- safetensors
- mistral
- text-generation
- zh
- en
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- has_space
model_name: Breeze-7B-Instruct-v0_1-GPTQ
base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
inference: false
model_creator: MediaTek-Research
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ](https://huggingface.co/MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ) is a quantized (GPTQ) version of [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/Breeze-7B-Instruct-v0_1-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
|
MaziyarPanahi/zephyr-7b-beta-GPTQ
|
MaziyarPanahi
| 2024-01-19T23:37:05Z | 20 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"pytorch",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2310.16944",
"base_model:mistralai/Mistral-7B-v0.1",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"conversational",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"license:apache-2.0"
] |
text-generation
| 2024-01-19T23:34:55Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- safetensors
- mistral
- text-generation
- generated_from_trainer
- en
- dataset:HuggingFaceH4/ultrachat_200k
- dataset:HuggingFaceH4/ultrafeedback_binarized
- arxiv:2305.18290
- arxiv:2310.16944
- base_model:mistralai/Mistral-7B-v0.1
- license:mit
- model-index
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: zephyr-7b-beta-GPTQ
base_model: HuggingFaceH4/zephyr-7b-beta
inference: false
model_creator: HuggingFaceH4
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/zephyr-7b-beta-GPTQ](https://huggingface.co/MaziyarPanahi/zephyr-7b-beta-GPTQ) is a quantized (GPTQ) version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/zephyr-7b-beta-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
|
beeezeee/whisper-large-v0
|
beeezeee
| 2024-01-19T23:32:53Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T17:54:52Z |
---
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v0
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/WinterGoddess-1.4x-70B-L2-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T23:24:40Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T23:05:03Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and then tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? -> I used the DARE method to merge the 3 models, then trained an Axolotl qLoRA on top, and then used lora-merge, copying the files of the base merged model because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
samwell/Reinforce-PixelCopter
|
samwell
| 2024-01-19T23:11:22Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T02:11:09Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.80 +/- 25.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/WinterGoddess-1.4x-70B-L2-4.65bpw-h6-exl2
|
LoneStriker
| 2024-01-19T23:05:01Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:52:48Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and then tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? -> I used the DARE method to merge the 3 models, then trained an Axolotl qLoRA on top, and then used lora-merge, copying the files of the base merged model because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
duckdoingdev/ppo-LunarLander-v2
|
duckdoingdev
| 2024-01-19T22:57:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T22:57:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.35 +/- 9.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Checkpoint filename is an assumption -- check this repo's Files tab for the actual name.
checkpoint = load_from_hub(repo_id="duckdoingdev/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LoneStriker/TenyxChat-8x7B-v1-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T22:47:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T22:32:44Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. Chat template usage with a Hugging Face generation example is shown below.
### Chat Template (Jinja)
```rust
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
LegoClipStars/River_Kendall_RH
|
LegoClipStars
| 2024-01-19T22:46:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:dataautogpt3/OpenDalleV1.1",
"base_model:adapter:dataautogpt3/OpenDalleV1.1",
"license:cc-by-4.0",
"region:us"
] |
text-to-image
| 2024-01-19T22:45:08Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: NEFT
parameters:
negative_prompt: High school student
output:
url: images/5b7538c2190e21a3a865cbe703015bd6.jpg
base_model: dataautogpt3/OpenDalleV1.1
instance_prompt: Please spare me
license: cc-by-4.0
---
# River_Kendall_Rainbow_High
<Gallery />
## Model description
Here's my RVC voice model of River Kendall from Rainbow High
## Trigger words
You should use `Please spare me` to trigger the image generation.
## Download model
[Download](/LegoClipStars/River_Kendall_RH/tree/main) them in the Files & versions tab.
|
Kooten/Euryale-1.4-L2-70B-IQ2-GGUF
|
Kooten
| 2024-01-19T22:43:59Z | 3 | 3 | null |
[
"gguf",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T09:14:54Z |
---
license: llama2
language:
- en
---
# Euryale-1.4-L2-70B IQ2-GGUF
## Description
IQ2-GGUF quants of [Sao10K/Euryale-1.4-L2-70B](https://huggingface.co/Sao10K/Euryale-1.4-L2-70B)
Unlike regular GGUF quants, these use an importance matrix (similar to QuIP#) to keep the quant from degrading too much even at 2 bpw, allowing you to run larger models on less powerful machines.
***NOTE:*** Currently you will need experimental branches of Koboldcpp or Ooba for this to work.
- Nexesenex has compiled Windows binaries [HERE](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.55.1_b1842)
- [llamacpp_0.2.29 branch](https://github.com/oobabooga/text-generation-webui/tree/llamacpp_0.2.29) of Ooba also works
[More info about IQ2](https://github.com/ggerganov/llama.cpp/pull/4897)
# Models
Models: [IQ2-XS](https://huggingface.co/Kooten/Euryale-1.4-L2-70B-IQ2-GGUF/blob/main/Euryale-1.4-L2-70B-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/Euryale-1.4-L2-70B-IQ2-GGUF/blob/main/Euryale-1.4-L2-70B-IQ2_XXS.gguf)
Regular GGUF Quants: [Here](https://huggingface.co/Sao10K/Euryale-1.4-L2-70B-GGUF)
## Prompt Format
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:
```
## Contact
Kooten on discord
|
timuryun/autotrain-ughdn-x1a7j
|
timuryun
| 2024-01-19T22:42:39Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T22:42:35Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
coversia21/RVC_ComoTanMuchachos
|
coversia21
| 2024-01-19T22:39:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:dataautogpt3/OpenDalleV1.1",
"base_model:adapter:dataautogpt3/OpenDalleV1.1",
"license:openrail",
"region:us"
] |
text-to-image
| 2024-01-19T22:34:55Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/muchacho.webp
base_model: dataautogpt3/OpenDalleV1.1
instance_prompt: null
license: openrail
---
# RVC_ComoTanMuchachos
<Gallery />
## Download model
[Download](/coversia21/RVC_ComoTanMuchachos/tree/main) them in the Files & versions tab.
|
ib1368/ppo-CartPole-v1-scratch
|
ib1368
| 2024-01-19T22:32:19Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T22:30:52Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -161.68 +/- 83.93
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'ib1368/ppo-CartPole-v1-scratch',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
LoneStriker/TenyxChat-8x7B-v1-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T22:22:16Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T22:10:33Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. Usage of the chat template with a Hugging Face generation example is shown below.
### Chat Template (Jinja)
```rust
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging Face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
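# Print the completion; the "Output" section below shows an example of what this yields
print(outputs[0]["generated_text"])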
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF
|
afrideva
| 2024-01-19T22:21:52Z | 71 | 2 | null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"es",
"ru",
"zh",
"de",
"fr",
"th",
"ca",
"it",
"ja",
"pl",
"eo",
"eu",
"vi",
"fi",
"hu",
"ar",
"nl",
"da",
"tr",
"ko",
"he",
"id",
"cs",
"bn",
"sv",
"base_model:NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2",
"base_model:quantized:NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2",
"region:us",
"conversational"
] |
text-generation
| 2024-01-19T22:11:44Z |
---
base_model: NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2
inference: false
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
model_creator: NickyNicky
model_name: dolphin-2_6-phi-2_oasst2_chatML_V2
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF
Quantized GGUF model files for [dolphin-2_6-phi-2_oasst2_chatML_V2](https://huggingface.co/NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2) from [NickyNicky](https://huggingface.co/NickyNicky)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.fp16.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.fp16.gguf) | fp16 | 5.56 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q2_k.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q2_k.gguf) | q2_k | 1.09 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q3_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q3_k_m.gguf) | q3_k_m | 1.49 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q4_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q4_k_m.gguf) | q4_k_m | 1.79 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q5_k_m.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q5_k_m.gguf) | q5_k_m | 2.07 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q6_k.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q6_k.gguf) | q6_k | 2.29 GB |
| [dolphin-2_6-phi-2_oasst2_chatml_v2.q8_0.gguf](https://huggingface.co/afrideva/dolphin-2_6-phi-2_oasst2_chatML_V2-GGUF/resolve/main/dolphin-2_6-phi-2_oasst2_chatml_v2.q8_0.gguf) | q8_0 | 2.96 GB |
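As an illustration, a minimal `llama-cpp-python` sketch for running one of these files locally; it assumes you have already downloaded the q4_k_m quant from the table above, and the ChatML prompt simply mirrors the format used in the original card below:
```python
from llama_cpp import Llama

# Path to the locally downloaded quant from the table above
llm = Llama(model_path="./dolphin-2_6-phi-2_oasst2_chatml_v2.q4_k_m.gguf", n_ctx=2048)

# ChatML-style prompt, matching the format shown in the original model card
prompt = (
    "<|im_start|>system\nYou are a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```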
## Original Model Card:
```
- model fine tune base: cognitivecomputations/dolphin-2_6-phi-2
- sft
- flash-attention 2
- loss: 0.85
- steps: 3000
- max_length: 2028
- neftune_noise_alpha: 5
```

Install packages
```Python
!python -m pip install --upgrade pip
!pip install -q datasets trl peft bitsandbytes sentencepiece wandb
!pip install -q accelerate safetensors deepspeed
!pip install -q scipy
!export CUDA_HOME=/usr/local/cuda-11.8
# !pip install ninja
!pip install ninja packaging --upgrade -qqq
!MAX_JOBS=4 pip install flash-attn --no-build-isolation -qqq
!pip install git+"https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary" -qqq
!python -m pip install optimum -qqq
```
Load the model and generate text
```Python
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging,
GenerationConfig,
TextIteratorStreamer,
)
# from attention_sinks import AutoModelForCausalLM
import torch
model_id = "NickyNicky/dolphin-2_6-phi-2_oasst2_chatML_V2"
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
load_in_4bit=True,
low_cpu_mem_usage= True,
flash_attn=True,
flash_rotary=True,
fused_dense=True,
)
max_length=2028
print("max_length",max_length)
tokenizer = AutoTokenizer.from_pretrained(model_id,
use_fast = True,
max_length=max_length,
trust_remote_code=True,)
prompt= """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
tengo hambre que me recomiendas<|im_end|>
<|im_start|>assistant"""
inputs = tokenizer.encode(prompt,
return_tensors="pt",
add_special_tokens=False).cuda()#.to("cuda") # False # True
generation_config = GenerationConfig(
max_new_tokens=700,
temperature=0.5,
top_p=0.9,
top_k=45,
repetition_penalty=1.15, #1.1, # 1.0 means no penalty, > 1.0 means penalty, 1.2 from CTRL paper
do_sample=True,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
# bos_token_id=tokenizer.eos_token_id,
# use_cache=True,
# stopping_criteria= StoppingCriteriaList([stopping_criteria]),
)
outputs = model.generate(generation_config=generation_config,
input_ids=inputs,)
# tokenizer.decode(outputs[0], skip_special_tokens=False) #True
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
'''OUTPUT:
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
tengo hambre que me recomiendas<|im_end|>
<|im_start|>assistant
Hay muchos tipos de alimentos que puedes probar si tienes hambre, pero aquí te muestro una lista ordenada por calor:
1- Frutas y verduras: Estas son buena opción para quitar hambre sin agotar tu cuerpo con grandes cantidades de carbohidratos. Algunas frutas y verduras que podrían ser suficientemente altas en calor durante el día incluyen tomates, plátanos, espinacas, papas, nueces, manzanas, limones, guisantes, cucumbers, zanahorias, etc.
2- Proteínas: Estas son importantes para mantener tu masa muscular y fuerzosa durante el día. Algunas proteínas que podrían ser útiles para quitar hambre durante el día incluyen carne, aceite de oliva, miel, yogur, leche fresca o sopa de gorditas, etc.
3- Carbohidratos: Estas son importantes para energizarte durante el día y mantenerte físico. Algunas frutas y verduras que podrían ser útiles para quitar hambre durante el día incluyen pan, tortillas, roti, arroz, pasta, rice, polenta, cereales, granola, etc.
4- Grains: Estas son importantes para mantenerte satiente durante el día y reducir la frecuencia de comidas rápida. Algunas gromas que podrían ser útiles para quitar hambre durante el día incluyen lentejas, farinas, tortilla, ensalada, etc.
5- Nuts y semolina: Estas son buenas opciones para quitar hambre durante el día sin agotar tu cuerpo con grandes cantidades de azúcar. Algunas frutas y verduras que podrían ser útiles para quitar hambre durante el día incluyen anacardios, almendras, macetas, bocaditos, panquesado, etc.
6- Papel picado: Esta es una opción deliciosa y económica que puedes preparar en caso de quitar hambre durante el día. Para hacer papel picado, primero cortezamos las frutas y verduras que deseas usarlas, y luego cortezamos las frutas y verduras que no deseas usarlas. A continuación, cortezamos las frutas y verduras que deseas usarlas más grandes y que estén más frescas, y luego cortezamos las frutas y verduras
'''
```
|
corbt/example-mistral-lora
|
corbt
| 2024-01-19T22:05:32Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:quantized:OpenPipe/mistral-ft-optimized-1227",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-19T22:04:27Z |
---
license: apache-2.0
base_model: OpenPipe/mistral-ft-optimized-1227
tags:
- generated_from_trainer
model-index:
- name: models/loras2/7bdb17d0-3f6b-4921-93db-0f46c4d9d81b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# models/loras2/7bdb17d0-3f6b-4921-93db-0f46c4d9d81b
This model is a fine-tuned version of [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4795 | 0.02 | 1 | 0.4746 |
| 0.0282 | 0.2 | 12 | 0.0309 |
| 0.0168 | 0.4 | 24 | 0.0242 |
| 0.0216 | 0.59 | 36 | 0.0208 |
| 0.0167 | 0.79 | 48 | 0.0189 |
| 0.0157 | 0.99 | 60 | 0.0186 |
| 0.0156 | 1.19 | 72 | 0.0177 |
| 0.0135 | 1.38 | 84 | 0.0182 |
| 0.0139 | 1.58 | 96 | 0.0178 |
| 0.0169 | 1.78 | 108 | 0.0178 |
| 0.0111 | 1.98 | 120 | 0.0179 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
timuryun/autotrain-xr1bw-vrs40
|
timuryun
| 2024-01-19T21:57:56Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T21:57:52Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
karawalla/shiptraining2024001
|
karawalla
| 2024-01-19T21:41:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T21:41:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF
|
andrijdavid
| 2024-01-19T21:08:30Z | 47 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"GGUF",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T20:58:33Z |
---
language:
- en
license: apache-2.0
tags:
- GGUF
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
quantized_by: andrijdavid
---
# TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF
- Original model: [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
<!-- description start -->
## Description
This repo contains GGUF format model files for [TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF and below it, a specific filename to download, such as: TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./TinyLlama-1.1B-intermediate-step-1431k-3T-f16.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: TinyLlama-1.1B-intermediate-step-1431k-3T
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
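Because the checkpoint shares the Llama 2 architecture and tokenizer, the original (non-GGUF) weights also load with plain `transformers`; the following is a minimal sketch only, with the repo id taken from the original model linked at the top of this card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```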
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| TinyLlama-1.1B-intermediate-step-1195k-2.5T | 2.5T | 58.96 | 34.40 | 58.72 | 31.91 | 56.78 | 63.21 | 73.07 | 53.86 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
<!-- original-model-card end -->
|
rheubanks/llama2_instruct_generation
|
rheubanks
| 2024-01-19T21:06:05Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-19T21:05:41Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6705
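Since this repository contains a PEFT (LoRA) adapter rather than full model weights, a minimal loading sketch under that assumption looks like the following; the adapter repo id is this card's path and the base model comes from the metadata above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter from this repository
model = PeftModel.from_pretrained(base, "rheubanks/llama2_instruct_generation")
```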
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9724 | 0.0 | 20 | 1.8100 |
| 1.8173 | 0.01 | 40 | 1.7801 |
| 1.8184 | 0.01 | 60 | 1.7671 |
| 1.8725 | 0.01 | 80 | 1.7568 |
| 1.8967 | 0.01 | 100 | 1.7460 |
| 1.8943 | 0.02 | 120 | 1.7172 |
| 1.788 | 0.02 | 140 | 1.7045 |
| 1.8953 | 0.02 | 160 | 1.6986 |
| 1.8262 | 0.02 | 180 | 1.6943 |
| 1.8472 | 0.03 | 200 | 1.6926 |
| 1.8416 | 0.03 | 220 | 1.6896 |
| 1.838 | 0.03 | 240 | 1.6855 |
| 1.7743 | 0.04 | 260 | 1.6806 |
| 1.8562 | 0.04 | 280 | 1.6785 |
| 1.8562 | 0.04 | 300 | 1.6794 |
| 1.8117 | 0.04 | 320 | 1.6783 |
| 1.8193 | 0.05 | 340 | 1.6768 |
| 1.8807 | 0.05 | 360 | 1.6745 |
| 1.7641 | 0.05 | 380 | 1.6738 |
| 1.7738 | 0.05 | 400 | 1.6735 |
| 1.7759 | 0.06 | 420 | 1.6733 |
| 1.7089 | 0.06 | 440 | 1.6721 |
| 1.7984 | 0.06 | 460 | 1.6706 |
| 1.7243 | 0.07 | 480 | 1.6720 |
| 1.9205 | 0.07 | 500 | 1.6705 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Ghunghru/xmod-base
|
Ghunghru
| 2024-01-19T20:31:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xmod",
"text-classification",
"generated_from_trainer",
"base_model:facebook/xmod-base",
"base_model:finetune:facebook/xmod-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T19:50:43Z |
---
license: mit
base_model: facebook/xmod-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xmod-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xmod-base
This model is a fine-tuned version of [facebook/xmod-base](https://huggingface.co/facebook/xmod-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5756
- F1: 0.4000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6685 | 1.0 | 189 | 0.6350 | 0.0 |
| 0.6631 | 2.0 | 378 | 0.6223 | 0.0 |
| 0.6368 | 3.0 | 567 | 0.6064 | 0.0 |
| 0.6075 | 4.0 | 756 | 0.5928 | 0.0 |
| 0.6102 | 5.0 | 945 | 0.5549 | 0.3729 |
| 0.5635 | 6.0 | 1134 | 0.6121 | 0.2727 |
| 0.5783 | 7.0 | 1323 | 0.5595 | 0.4118 |
| 0.5206 | 8.0 | 1512 | 0.5852 | 0.4068 |
| 0.5619 | 9.0 | 1701 | 0.5778 | 0.4000 |
| 0.5518 | 10.0 | 1890 | 0.5756 | 0.4000 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ayratmsk/distilbert-base-uncased-finetuned-emotion
|
ayratmsk
| 2024-01-19T20:27:45Z | 89 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T15:40:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
- name: F1
type: f1
value: 0.9187183032682423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.9185
- F1: 0.9187
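For inference, a minimal sketch with the `transformers` pipeline; the repo id is this card's path and the example sentence is purely illustrative:
```python
from transformers import pipeline

# Emotion classification with the fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="ayratmsk/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled that the fine-tuning finally converged!"))
```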
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8162 | 1.0 | 250 | 0.3287 | 0.9015 | 0.9002 |
| 0.2514 | 2.0 | 500 | 0.2206 | 0.9185 | 0.9187 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Lerik/cat_vs_dog_recognition
|
Lerik
| 2024-01-19T20:23:08Z | 0 | 0 |
fastai
|
[
"fastai",
"image-classification",
"en",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-01-18T20:53:58Z |
---
license: apache-2.0
language:
- en
library_name: fastai
pipeline_tag: image-classification
---
|
mitro99/whisper-tiny-polyai-enUS_fewer_epochs
|
mitro99
| 2024-01-19T20:16:26Z | 60 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-19T20:03:49Z |
---
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-polyai-enUS_fewer_epochs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
metrics:
- name: Wer
type: wer
value: 0.34946871310507677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-polyai-enUS_fewer_epochs
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6145
- Wer Ortho: 0.3800
- Wer: 0.3495
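A minimal transcription sketch with the `transformers` ASR pipeline; the repo id is this card's path and the audio filename is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mitro99/whisper-tiny-polyai-enUS_fewer_epochs",
)
# Replace with your own audio file (e.g. a 16 kHz WAV recording)
print(asr("example_call.wav")["text"])
```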
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 2.9576 | 3.33 | 50 | 1.9424 | 0.5077 | 0.4050 |
| 0.5132 | 6.67 | 100 | 0.6382 | 0.4152 | 0.3684 |
| 0.2569 | 10.0 | 150 | 0.5925 | 0.3893 | 0.3554 |
| 0.0973 | 13.33 | 200 | 0.6145 | 0.3800 | 0.3495 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sanduntg/mistral_instruct_generation
|
sanduntg
| 2024-01-19T19:57:33Z | 25 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T19:08:22Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: mistral_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
castorini/rank_zephyr_7b_v1_full
|
castorini
| 2024-01-19T19:54:29Z | 2,210 | 20 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"en",
"arxiv:2312.02724",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:52:58Z |
---
tags:
- generated_from_trainer
license: mit
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/castorini/rank_zephyr_7b_v1_full/resolve/main/thumbnail.jpeg" alt="RankZephyr Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- <img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> -->
# Model Card for RankZephyr 7B V1 - Full
RankZephyr is a series of language models trained to act as helpful reranking assistants built on the Zephyr-7B-β model.
RankZephyr Base follows single-stage fine-tuning on RankGPT-3.5, while RankZephyr Full is further fine-tuned on RankGPT-4 reorderings of OpenAI's Ada2 orderings for 5K queries.
## Model description
- **Model type:** A 7B parameter GPT-like model initially fine-tuned on a mix of publicly available, synthetic datasets, followed by task-specific listwise reranking data.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Fine-tuned from model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/castorini/rank_llm
- **Paper:** https://arxiv.org/abs/2312.02724
## Effectiveness
At the time of release, RankZephyr-7B-Full is the state-of-the-art open-source reranking model across a range of datasets, including DL19/20/21/22, TREC-COVID, and TREC-News.
With the MS MARCO v1 collection:
| Model | Size | First Stage | DL19 | DL20|
|-------------|-----|----|---------------|--------------|
| **RankZephyr-7b-v1-full-rho** 🪁 | **7B** | **SPLADE++ ED** | **0.7855** | **0.8255** |
| **RankZephyr-7b-v1-full** 🪁 | **7B** | **SPLADE++ ED** | **0.7803** | **0.8211** |
| RankGPT-4 (PSC) | -| SPLADE++ ED | 0.7601 | 0.7514 |
| RankGPT-4 | -| SPLADE++ ED | 0.7464 | 0.7076 |
| **RankZephyr-7b-v1-base** 🪁 | **7B** | **SPLADE++ ED** | **0.7341** | **0.7213** |
| RankGPT-3.5 | -| SPLADE++ ED | 0.7504 | 0.7120|
More details can be found in the paper.
## Intended uses & limitations
The model is to be used in conjunction with the [RankLLM repository](https://github.com/castorini/rank_llm). While `rank-llm` exists as a PyPI package, we are currently in the early stages of development and encourage users to install directly from source.
The original Zephyr model is trained for chat. In our case, RankZephyr is fine-tuned to act as a listwise reranking agent. You provide it with a query and documents and get back a reordered list of document identifiers.
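As a bare-bones illustration, the checkpoint itself loads like any causal LM; this is a sketch only, and the listwise prompt construction and output parsing are handled by the RankLLM repository rather than reproduced here:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "castorini/rank_zephyr_7b_v1_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
```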
## Bias, Risks, and Limitations
The following is an excerpt from the [Zephyr-7B-β model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md#bias-risks--limitations):
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
> Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`mistralai/Mistral-7B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
Our model is trained specifically on monolingual English data; effectiveness on multilingual sets is not guaranteed.
## Citation
If you find RankZephyr is useful in your work, please cite the following paper:
```
@ARTICLE{pradeep2023rankzephyr,
title = {{RankZephyr}: Effective and Robust Zero-Shot Listwise Reranking is a Breeze!},
author = {Ronak Pradeep and Sahel Sharifymoghaddam and Jimmy Lin},
year = {2023},
journal = {arXiv:2312.02724}
}
```
|
Asude/gpt2-256t-human_reward-neg-20
|
Asude
| 2024-01-19T19:42:59Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-01-19T19:42:36Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Asude/gpt2-256t-human_reward-neg-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Asude/gpt2-256t-human_reward-neg-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Asude/gpt2-256t-human_reward-neg-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
SeanJIE250/chatLAW1.0
|
SeanJIE250
| 2024-01-19T19:35:45Z | 2 | 1 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-16T00:04:10Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
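Until the authors fill this in, a minimal sketch based only on this card's metadata (a PEFT adapter on NousResearch/Llama-2-7b-chat-hf); the loading pattern below is an assumption, not author-provided usage:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter hosted in this repository
model = PeftModel.from_pretrained(base, "SeanJIE250/chatLAW1.0")
```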
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
wenqiglantz/MistralTrinity-7B-slerp-dpo
|
wenqiglantz
| 2024-01-19T19:24:25Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"conversational",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:wenqiglantz/MistralTrinity-7B-slerp",
"base_model:finetune:wenqiglantz/MistralTrinity-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:07:41Z |
---
base_model: wenqiglantz/MistralTrinity-7B-slerp
tags:
- mistral
- instruct
- finetune
- chatml
- synthetic data
- distillation
- dpo
- rlhf
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
---
# MistralTrinity-7B-slerp-dpo
Inspired by @mlabonne's blog post [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac), this model was fine-tuned with DPO (Direct Preference Optimization) on the base model `MistralTrinity-7B-slerp`, a merge of `mistralai/Mistral-7B-Instruct-v0.2` and `jan-hq/trinity-v1`, using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset.
The code to train this model is available on [Google Colab](https://colab.research.google.com/github/wenqiglantz/llmops/blob/main/Fine_tune_MistralTrinity_7B_slerp_with_DPO.ipynb) and [GitHub](https://github.com/wenqiglantz/llmops/blob/main/Fine_tune_MistralTrinity_7B_slerp_with_DPO.ipynb).
It required an A100 GPU for over an hour.
Check out fine-tuning run details on [Weights & Biases](https://wandb.ai/wenqiglantz/huggingface/runs/sxbgd33f).
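A minimal inference sketch is shown below (assumptions: the tokenizer ships a ChatML-style chat template, matching the DPO training data, and a GPU with bfloat16 support is available):
```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "wenqiglantz/MistralTrinity-7B-slerp-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline("text-generation", model=model_id, tokenizer=tokenizer,
                     torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain Direct Preference Optimization in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)[0]["generated_text"])
```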
|
ntc-ai/SDXL-LoRA-slider.on-a-ship
|
ntc-ai
| 2024-01-19T19:22:16Z | 45 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-19T19:22:12Z |
---
language:
- en
thumbnail: "images/evaluate/on a ship.../on a ship_17_3.0.png"
widget:
- text: on a ship
output:
url: images/on a ship_17_3.0.png
- text: on a ship
output:
url: images/on a ship_19_3.0.png
- text: on a ship
output:
url: images/on a ship_20_3.0.png
- text: on a ship
output:
url: images/on a ship_21_3.0.png
- text: on a ship
output:
url: images/on a ship_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "on a ship"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - on a ship (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/on a ship_17_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_17_0.0.png" width=256 height=256 /> | <img src="images/on a ship_17_3.0.png" width=256 height=256 /> |
| <img src="images/on a ship_19_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_19_0.0.png" width=256 height=256 /> | <img src="images/on a ship_19_3.0.png" width=256 height=256 /> |
| <img src="images/on a ship_20_-3.0.png" width=256 height=256 /> | <img src="images/on a ship_20_0.0.png" width=256 height=256 /> | <img src="images/on a ship_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
on a ship
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.on-a-ship', weight_name='on a ship.safetensors', adapter_name="on a ship")
# Activate the LoRA
pipe.set_adapters(["on a ship"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, on a ship"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1,140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2
|
ewqr2130
| 2024-01-19T19:21:48Z | 1,376 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T19:00:27Z |
---
license: apache-2.0
---
ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2---- 7k steps.
|
miguelcarv/resnet-50-text-detector
|
miguelcarv
| 2024-01-19T19:20:28Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-19T18:51:36Z |
# Model Card for ResNet-50 Text Detector
This model was trained to quickly classify whether or not an image contains legible text. It was trained as a binary classification problem on the COCO-Text dataset together with some images from LLaVAR, for a total of ~70k images, half of which contained legible text and half of which did not.
# Model Details
## How to Get Started with the Model
```python
from PIL import Image
import requests
from transformers import AutoImageProcessor, AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained(
"miguelcarv/resnet-50-text-detector",
)
processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50", do_resize=False)
url = "http://images.cocodataset.org/train2017/000000044520.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert('RGB').resize((256,256))
inputs = processor(image, return_tensors="pt").pixel_values
outputs = model(inputs)
logits_per_image = outputs.logits
probs = logits_per_image.softmax(dim=1)
print(probs)
# tensor([[0.1149, 0.8851]])
```
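To turn the probabilities into a label, the class mapping can be read from the model config (assumption: the checkpoint defines `id2label`; if not, it falls back to generic `LABEL_0`/`LABEL_1` names):
```python
# Pick the most likely class and look up its name in the config.
predicted_class = probs.argmax(-1).item()
print(model.config.id2label[predicted_class])
```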
# Training Details
- Trained for three epochs
- Resolution: 256x256
- Learning rate: 5e-5
- Optimizer: AdamW
- Batch size: 64
- Trained with FP32
|
Makucas/Mistral-7B-Instruct-v0.2_08
|
Makucas
| 2024-01-19T19:20:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T18:26:17Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Mistral-7B-Instruct-v0.2_08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_08
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7263 | 0.17 | 20 | 1.6209 |
| 1.5225 | 0.34 | 40 | 1.5653 |
| 1.398 | 0.51 | 60 | 1.5336 |
| 1.5291 | 0.68 | 80 | 1.4972 |
| 1.5079 | 0.85 | 100 | 1.4544 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
senseable/MoMo-70B-lora-1.8.6-DPO-gguf
|
senseable
| 2024-01-19T19:05:39Z | 4 | 4 |
transformers
|
[
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T02:48:20Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- gguf
---
Split files need to be merged:
Windows:
`copy /B MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf-split-* MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf`
`copy /B MoMo-70B-lora-1.8.6-DPO-q6_k.gguf-split-* MoMo-70B-lora-1.8.6-DPO-q6_k.gguf`
Linux/Mac:
`cat MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf-split-* > MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf`
`cat MoMo-70B-lora-1.8.6-DPO-q6_k.gguf-split-* > MoMo-70B-lora-1.8.6-DPO-q6_k.gguf`
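Once merged, one way to sanity-check the file is to load it with a GGUF-capable runtime. A minimal sketch using the llama-cpp-python bindings (assumptions: the bindings are installed, the merged file sits in the working directory, and you have enough RAM/VRAM for a 70B quant):
```python
from llama_cpp import Llama

# Load the merged q5_k_m file and run a short completion as a smoke test.
llm = Llama(model_path="MoMo-70B-lora-1.8.6-DPO-q5_k_m.gguf", n_ctx=2048)
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```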
|
vicgalle/franken-Beagle-11B
|
vicgalle
| 2024-01-19T19:04:34Z | 58 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:finetune:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:51:17Z |
---
base_model:
- mlabonne/NeuralBeagle14-7B
tags:
- mergekit
- merge
license: apache-2.0
---
# franken-Beagle-11B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 24]
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
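A minimal inference sketch (assumptions: the merged checkpoint loads as a standard causal LM and a bfloat16-capable GPU is available; prompt formatting follows the base NeuralBeagle model and is not shown here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "vicgalle/franken-Beagle-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("What is a frankenmerge?", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```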
|
frluquba/clasificador2-muchocine
|
frluquba
| 2024-01-19T19:00:20Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:GKLMIP/bert-khmer-base-uncased-tokenized",
"base_model:finetune:GKLMIP/bert-khmer-base-uncased-tokenized",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T18:59:46Z |
---
base_model: GKLMIP/bert-khmer-base-uncased-tokenized
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador2-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador2-muchocine
This model is a fine-tuned version of [GKLMIP/bert-khmer-base-uncased-tokenized](https://huggingface.co/GKLMIP/bert-khmer-base-uncased-tokenized) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5195
- Accuracy: 0.3313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 339 | 1.5292 | 0.3313 |
| 1.5525 | 2.0 | 678 | 1.5392 | 0.2057 |
| 1.5301 | 3.0 | 1017 | 1.5195 | 0.3313 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ewqr2130/alignment-handbook-zephyr-7b_ppo_5e7step_102
|
ewqr2130
| 2024-01-19T18:59:13Z | 1,369 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:44:57Z |
---
license: apache-2.0
---
ewqr2130/alignment-handbook-zephyr-7b_ppo_5e7step_102
|
Ghunghru/Misinformation-Covid-bert-base-chinese
|
Ghunghru
| 2024-01-19T18:57:25Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-18T21:53:13Z |
---
base_model: bert-base-chinese
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Misinformation-Covid-bert-base-chinese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Misinformation-Covid-bert-base-chinese
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6165
- F1: 0.4706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6722 | 1.0 | 189 | 0.6155 | 0.0 |
| 0.6611 | 2.0 | 378 | 0.5880 | 0.2979 |
| 0.6133 | 3.0 | 567 | 0.5847 | 0.2727 |
| 0.6343 | 4.0 | 756 | 0.5573 | 0.4151 |
| 0.6557 | 5.0 | 945 | 0.5704 | 0.4444 |
| 0.5996 | 6.0 | 1134 | 0.6545 | 0.3750 |
| 0.6239 | 7.0 | 1323 | 0.6037 | 0.4407 |
| 0.6089 | 8.0 | 1512 | 0.6145 | 0.4590 |
| 0.555 | 9.0 | 1701 | 0.6273 | 0.4746 |
| 0.5281 | 10.0 | 1890 | 0.6165 | 0.4706 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Asude/gpt2-256t-human_reward-neg-10
|
Asude
| 2024-01-19T18:51:09Z | 32 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-01-19T18:50:46Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Asude//tmp/tmpqo6y_q3r/Asude/gpt2-256t-human_reward-neg-10")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Asude//tmp/tmpqo6y_q3r/Asude/gpt2-256t-human_reward-neg-10")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Asude//tmp/tmpqo6y_q3r/Asude/gpt2-256t-human_reward-neg-10")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Ghunghru/Misinformation-Covid-xlm-roberta-base
|
Ghunghru
| 2024-01-19T18:50:15Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-12T13:42:13Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Misinformation-Covid-xlm-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Misinformation-Covid-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7194
- F1: 0.4333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6737 | 1.0 | 189 | 0.6662 | 0.0 |
| 0.7083 | 2.0 | 378 | 0.6540 | 0.0 |
| 0.7185 | 3.0 | 567 | 0.8346 | 0.0 |
| 0.7826 | 4.0 | 756 | 0.8685 | 0.0 |
| 0.8333 | 5.0 | 945 | 0.7939 | 0.0 |
| 0.7989 | 6.0 | 1134 | 0.8978 | 0.0 |
| 0.8009 | 7.0 | 1323 | 0.7276 | 0.3265 |
| 0.6824 | 8.0 | 1512 | 0.7733 | 0.3774 |
| 0.6979 | 9.0 | 1701 | 0.7327 | 0.4407 |
| 0.6963 | 10.0 | 1890 | 0.7194 | 0.4333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
frluquba/clasificador-muchocine
|
frluquba
| 2024-01-19T18:48:52Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:GKLMIP/bert-khmer-base-uncased-tokenized",
"base_model:finetune:GKLMIP/bert-khmer-base-uncased-tokenized",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T11:39:35Z |
---
base_model: GKLMIP/bert-khmer-base-uncased-tokenized
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [GKLMIP/bert-khmer-base-uncased-tokenized](https://huggingface.co/GKLMIP/bert-khmer-base-uncased-tokenized) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4228
- Accuracy: 0.3959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 339 | 1.5194 | 0.3313 |
| 1.5424 | 2.0 | 678 | 1.4436 | 0.3589 |
| 1.3262 | 3.0 | 1017 | 1.4228 | 0.3959 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
webpolis/zenos-gpt-j-6B-instruct-4bit
|
webpolis
| 2024-01-19T18:44:12Z | 150 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-09-25T02:19:51Z |
---
{}
---
# Zenos GPT-J 6B Instruct 4-bit
## Model Overview
- **Name:** zenos-gpt-j-6B-instruct-4bit
- **Datasets Used:** [Alpaca Spanish](https://huggingface.co/datasets/bertin-project/alpaca-spanish), [Evol Instruct](https://huggingface.co/datasets/FreedomIntelligence/evol-instruct-spanish)
- **Architecture:** GPT-J
- **Model Size:** 6 Billion parameters
- **Precision:** 4 bits
- **Fine-tuning:** This model was fine-tuned using Low-Rank Adaptation (LoRA).
- **Content Moderation:** This model is not moderated.
## Description
Zenos GPT-J 6B Instruct 4-bit is a Spanish instruction-following model based on the GPT-J architecture with 6 billion parameters. It has been fine-tuned on the Alpaca Spanish and Evol Instruct datasets, making it particularly suitable for natural language understanding and generation tasks in Spanish.
An experimental Twitter (**X**) bot is available at [https://twitter.com/ZenosBot](https://twitter.com/ZenosBot), which comments on news published by media outlets in Argentina.
### Requirements
The latest development version of Transformers, which includes serialization of 4-bit models.
- [Transformers](https://huggingface.co/docs/transformers/installation#install-from-source)
- Bitsandbytes >= 0.41.3
Since this is a compressed (4-bit) version, it can fit into ~7GB of VRAM.
## Usage
You can use this model for various natural language processing tasks such as text generation, summarization, and more. Below is an example of how to use it in Python with the Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import torch
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("webpolis/zenos-gpt-j-6B-instruct-4bit")
model = AutoModelForCausalLM.from_pretrained(
"webpolis/zenos-gpt-j-6B-instruct-4bit",
use_safetensors=True
)
user_msg = '''Escribe un poema breve utilizando los siguientes conceptos:
Bienestar, Corriente, Iluminación, Sed'''
# Generate text; mind the spaces between [INST] and [/INST]
prompt = f'[INST] {user_msg} [/INST]'
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)
attention_mask = inputs["attention_mask"].to(model.device)
generation_config = GenerationConfig(
temperature=0.2,
top_p=0.8,
top_k=40,
num_beams=1,
repetition_penalty=1.3,
do_sample=True
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
pad_token_id=tokenizer.eos_token_id,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=False,
max_new_tokens=512,
early_stopping=True
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
start_txt = output.find('[/INST]') + len('[/INST]')
end_txt = output.find("<|endoftext|>", start_txt)
answer = output[start_txt:end_txt]
print(answer)
```
# Inference
## Online
Currently, Hugging Face's Inference UI doesn't load the model properly. However, you can use it with regular Python code as shown above once you meet the [requirements](#requirements).
## CPU
Best performance can be achieved by downloading the [GGML 4-bit](https://huggingface.co/webpolis/zenos-gpt-j-6B-instruct-4bit/resolve/main/ggml-f16-q4_0.bin) model and running inference with the [rustformers llm](https://github.com/rustformers/llm) tool.
### Requirements
For optimal performance:
- 4 CPU cores
- 8GB RAM
On my Core i7 laptop it runs at around 250 ms per token:

# Acknowledgments
This model was developed by [Nicolás Iglesias](mailto:nfiglesias@gmail.com) using the Hugging Face Transformers library.
# LICENSE
Copyright 2023 [Nicolás Iglesias](mailto:nfiglesias@gmail.com)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this software except in compliance with the License.
You may obtain a copy of the License at
[Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
cocoirun/AIFT-Yi-Ko-6B-instruct-v0.4.15-dpo
|
cocoirun
| 2024-01-19T18:43:41Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:31:05Z |
---
license: cc-by-sa-4.0
---
<h1>Instruct model v0.4.15</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data and extracting its tasks,
we built our own training data for those tasks using open-source NLP datasets = aift-orca-v0.4
We constructed about 40,000 samples (history, science, math, machine reading comprehension, review analysis),
and in addition filtered and refined some of the Open-Orca-Ko data and added KoBEST data.
Additional training data was built from AIHub general-knowledge and machine reading comprehension data (morphology, reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data format.
AI2AI Challenge data was translated with Papago, and mistranslations were corrected manually.
English-Korean / Korean-English translation data was also used as training data.
SFT was performed on a total of 110,000 training samples.
<br>
Currently, part of the Open-Orca dataset is being translated and refined to train a new version of the model and improve its performance.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Machine reading comprehension training data was built from answers obtained via ChatGPT.
+ Grammar-related training data.
<br>
###The training data files are not publicly released.
<br>
<b><Training></b>
Training was performed with LoRA on 2x A100 40G GPUs.
|
awilliamson/phrankened
|
awilliamson
| 2024-01-19T18:41:50Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"microsoft/phi-2",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:39:25Z |
---
tags:
- merge
- mergekit
- lazymergekit
- microsoft/phi-2
- microsoft/phi-2
base_model:
- microsoft/phi-2
- microsoft/phi-2
---
# phrankened
phrankened is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
* [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: "microsoft/phi-2"
layer_range: [0, 12]
- sources:
- model: "microsoft/phi-2"
layer_range: [10, 22]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "awilliamson/phrankened"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mattshumer/QuadPhi
|
mattshumer
| 2024-01-19T18:33:33Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mattshumer/ThinkPhi",
"mattshumer/TalkPhi",
"conversational",
"custom_code",
"base_model:mattshumer/TalkPhi",
"base_model:merge:mattshumer/TalkPhi",
"base_model:mattshumer/ThinkPhi",
"base_model:merge:mattshumer/ThinkPhi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:28:36Z |
---
tags:
- merge
- mergekit
- lazymergekit
- mattshumer/ThinkPhi
- mattshumer/TalkPhi
base_model:
- mattshumer/ThinkPhi
- mattshumer/TalkPhi
---
# QuadPhi
QuadPhi is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mattshumer/ThinkPhi](https://huggingface.co/mattshumer/ThinkPhi)
* [mattshumer/TalkPhi](https://huggingface.co/mattshumer/TalkPhi)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mattshumer/ThinkPhi
layer_range: [0, 64]
- sources:
- model: mattshumer/TalkPhi
layer_range: [0, 64]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mattshumer/QuadPhi"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Its7up/Disco-Elysium-Narrator-RVC-v2
|
Its7up
| 2024-01-19T18:30:54Z | 0 | 8 | null |
[
"rvc",
"audio-to-audio",
"en",
"region:us"
] |
audio-to-audio
| 2024-01-19T12:45:59Z |
---
language:
- en
pipeline_tag: audio-to-audio
tags:
- rvc
---
An RVC v2 AI voice model trained on the Disco Elysium narrator Lenval Brown's voice from the game. Trained for 300 epochs on more than an hour of 10-second clips extracted from the video game. Clips were taken from any point in the story, including alternative clips hidden in the game files, and no distinction was made between different skills.
Some famous quotes read by this RVC AI voice:
<audio preload="auto" controls src="https://cdn-uploads.huggingface.co/production/uploads/65aa6badd2adc31ee3d6fc15/DLx3yhIfQcw5vmYRZnU-i.wav"></audio>
<audio preload="auto" controls src="https://cdn-uploads.huggingface.co/production/uploads/65aa6badd2adc31ee3d6fc15/VHSDS_gnebnXe3nxG44lT.wav"></audio>
<audio preload="auto" controls src="https://cdn-uploads.huggingface.co/production/uploads/65aa6badd2adc31ee3d6fc15/OZ6wtbuYw5KJOUr_Bdjcv.wav"></audio>
To use:
- Drag the de_narrator.pth file to the assets/weights folder
- Drag the added_IVF256_Flat_nprobe_1_de_narrator_v2.index file to the logs/de_narrator folder. If the logs/de_narrator folder does not exist, then create it.
|
cremabelleza/coralift
|
cremabelleza
| 2024-01-19T18:21:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-19T18:17:13Z |
<title>Coralift Anti-Wrinkle Cream: Youth and Beauty for Your Skin</title>
<h1>Coralift Anti-Wrinkle Cream: Youth and Beauty for Your Skin</h1>
If you are looking to rejuvenate and care for your skin, Coralift Anti-Wrinkle Cream is your ideal solution. This cream, available exclusively at <a href="http://es-keto-black.exclusive-goods.org/?alstream=u9Rk&sub_id=hug"><b>>>>www.coralift.es<<<</b></a>, is designed to deliver visible, effective results in the fight against wrinkles.
<a href="http://es-keto-black.exclusive-goods.org/?alstream=u9Rk&sub_id=hug"><b>>>>GO TO THE OFFICIAL WEBSITE HERE<<<</b></a>
At a price of 49 EUR, Coralift provides an advanced formula enriched with active ingredients that promote skin elasticity and firmness. It is perfect for those seeking an effective treatment to reduce the signs of aging, improving the skin's overall texture and appearance.
Visit es-m-coralift.quality-goods.org and place your order today. Incorporating Coralift into your skincare routine can make a big difference, giving you younger, more radiant, and healthier skin. Don't miss the chance to give your skin the care it deserves with this high-quality anti-wrinkle cream. Coralift is your ally for lasting, natural beauty!
|
gizmo-ai/distilbert-multilingual-nli-stsb-quora-ranking
|
gizmo-ai
| 2024-01-19T18:14:23Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-19T18:14:22Z |
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
model = AutoModel.from_pretrained('sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
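For ranking or semantic search, the pooled embeddings are typically compared with cosine similarity. A short sketch continuing from the `sentence_embeddings` computed above:
```python
import torch.nn.functional as F

# L2-normalize, after which the dot product equals cosine similarity.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)
```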
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
cocoirun/AIFT-Yi-Ko-6B-instruct-v0.4.15
|
cocoirun
| 2024-01-19T18:12:42Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:57:22Z |
---
license: cc-by-sa-4.0
---
<h1>Instruct model v0.4.15</h1>
<b><Training data construction></b>
After analyzing the Open-Orca-ko data and extracting its tasks,
we built our own training data for those tasks using open-source NLP datasets = aift-orca-v0.4
We constructed about 40,000 samples (history, science, math, machine reading comprehension, review analysis),
and in addition filtered and refined some of the Open-Orca-Ko data and added KoBEST data.
Additional training data was built from AIHub general-knowledge and machine reading comprehension data (morphology, reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data format.
AI2AI Challenge data was translated with Papago, and mistranslations were corrected manually.
English-Korean / Korean-English translation data was also used as training data.
SFT was performed on a total of 110,000 training samples.
<br>
Currently, part of the Open-Orca dataset is being translated and refined to train a new version of the model and improve its performance.
<br>
+ Added high-school history questions and TruthfulQA-related questions.
+ Added various IT knowledge data.
+ Machine reading comprehension training data was built from answers obtained via ChatGPT.
+ Grammar-related training data.
<br>
###The training data files are not publicly released.
<br>
<b><Training></b>
Training was performed with LoRA on 2x A100 40G GPUs.
|
kavyasasikumar07/my_awesome_model
|
kavyasasikumar07
| 2024-01-19T18:10:58Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T16:11:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: kavyasasikumar07/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kavyasasikumar07/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0658
- Validation Loss: 0.2273
- Train Accuracy: 0.9266
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2531 | 0.1844 | 0.9284 | 0 |
| 0.1316 | 0.1906 | 0.9318 | 1 |
| 0.0658 | 0.2273 | 0.9266 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
anantg/Mixtral-Finetune-Output
|
anantg
| 2024-01-19T18:05:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T18:05:03Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: Mixtral-Finetune-Output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral-Finetune-Output
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3062
## Model description
More information needed
## Intended uses & limitations
More information needed
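Only the PEFT adapter weights are published in this repository, so inference presumably requires loading the Mixtral base model and attaching the adapter. A minimal sketch (the 4-bit loading and the prompt format are assumptions; adjust to your hardware):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", load_in_4bit=True)
model = PeftModel.from_pretrained(base, "anantg/Mixtral-Finetune-Output")

prompt = "[INST] Summarize what a mixture-of-experts model is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```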
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5745 | 0.01 | 10 | 1.4701 |
| 1.5372 | 0.02 | 20 | 1.4541 |
| 1.4147 | 0.03 | 30 | 1.4433 |
| 1.423 | 0.04 | 40 | 1.4366 |
| 1.5318 | 0.05 | 50 | 1.4326 |
| 1.334 | 0.06 | 60 | 1.4296 |
| 1.364 | 0.07 | 70 | 1.4244 |
| 1.332 | 0.08 | 80 | 1.4194 |
| 1.3742 | 0.09 | 90 | 1.4163 |
| 1.4497 | 0.1 | 100 | 1.4124 |
| 1.4145 | 0.1 | 110 | 1.4098 |
| 1.4224 | 0.11 | 120 | 1.4050 |
| 1.4013 | 0.12 | 130 | 1.4017 |
| 1.547 | 0.13 | 140 | 1.4020 |
| 1.4969 | 0.14 | 150 | 1.3967 |
| 1.5716 | 0.15 | 160 | 1.3943 |
| 1.3677 | 0.16 | 170 | 1.3915 |
| 1.3789 | 0.17 | 180 | 1.3901 |
| 1.3188 | 0.18 | 190 | 1.3869 |
| 1.3317 | 0.19 | 200 | 1.3846 |
| 1.2552 | 0.2 | 210 | 1.3809 |
| 1.2584 | 0.21 | 220 | 1.3788 |
| 1.3958 | 0.22 | 230 | 1.3776 |
| 1.3345 | 0.23 | 240 | 1.3755 |
| 1.3562 | 0.24 | 250 | 1.3723 |
| 1.343 | 0.25 | 260 | 1.3726 |
| 1.3705 | 0.26 | 270 | 1.3695 |
| 1.5719 | 0.27 | 280 | 1.3687 |
| 1.3634 | 0.28 | 290 | 1.3652 |
| 1.4465 | 0.29 | 300 | 1.3668 |
| 1.3949 | 0.29 | 310 | 1.3642 |
| 1.3147 | 0.3 | 320 | 1.3631 |
| 1.368 | 0.31 | 330 | 1.3613 |
| 1.3482 | 0.32 | 340 | 1.3603 |
| 1.3143 | 0.33 | 350 | 1.3591 |
| 1.4717 | 0.34 | 360 | 1.3568 |
| 1.2089 | 0.35 | 370 | 1.3555 |
| 1.4223 | 0.36 | 380 | 1.3529 |
| 1.3895 | 0.37 | 390 | 1.3523 |
| 1.309 | 0.38 | 400 | 1.3504 |
| 1.3698 | 0.39 | 410 | 1.3487 |
| 1.2834 | 0.4 | 420 | 1.3468 |
| 1.2747 | 0.41 | 430 | 1.3471 |
| 1.3167 | 0.42 | 440 | 1.3460 |
| 1.3232 | 0.43 | 450 | 1.3438 |
| 1.3628 | 0.44 | 460 | 1.3422 |
| 1.3828 | 0.45 | 470 | 1.3417 |
| 1.3756 | 0.46 | 480 | 1.3412 |
| 1.385 | 0.47 | 490 | 1.3418 |
| 1.3622 | 0.48 | 500 | 1.3392 |
| 1.3322 | 0.49 | 510 | 1.3381 |
| 1.368 | 0.49 | 520 | 1.3365 |
| 1.3373 | 0.5 | 530 | 1.3355 |
| 1.4931 | 0.51 | 540 | 1.3354 |
| 1.3986 | 0.52 | 550 | 1.3333 |
| 1.3053 | 0.53 | 560 | 1.3312 |
| 1.2736 | 0.54 | 570 | 1.3297 |
| 1.2903 | 0.55 | 580 | 1.3298 |
| 1.328 | 0.56 | 590 | 1.3290 |
| 1.4081 | 0.57 | 600 | 1.3290 |
| 1.2852 | 0.58 | 610 | 1.3279 |
| 1.3636 | 0.59 | 620 | 1.3268 |
| 1.3448 | 0.6 | 630 | 1.3265 |
| 1.2061 | 0.61 | 640 | 1.3252 |
| 1.3519 | 0.62 | 650 | 1.3244 |
| 1.3632 | 0.63 | 660 | 1.3248 |
| 1.3784 | 0.64 | 670 | 1.3238 |
| 1.3349 | 0.65 | 680 | 1.3216 |
| 1.2603 | 0.66 | 690 | 1.3215 |
| 1.3566 | 0.67 | 700 | 1.3224 |
| 1.316 | 0.68 | 710 | 1.3208 |
| 1.1818 | 0.69 | 720 | 1.3203 |
| 1.3631 | 0.69 | 730 | 1.3190 |
| 1.3234 | 0.7 | 740 | 1.3184 |
| 1.2759 | 0.71 | 750 | 1.3177 |
| 1.3332 | 0.72 | 760 | 1.3177 |
| 1.2764 | 0.73 | 770 | 1.3165 |
| 1.2056 | 0.74 | 780 | 1.3155 |
| 1.4285 | 0.75 | 790 | 1.3158 |
| 1.3733 | 0.76 | 800 | 1.3150 |
| 1.2735 | 0.77 | 810 | 1.3143 |
| 1.3502 | 0.78 | 820 | 1.3137 |
| 1.093 | 0.79 | 830 | 1.3130 |
| 1.3451 | 0.8 | 840 | 1.3123 |
| 1.2942 | 0.81 | 850 | 1.3119 |
| 1.3258 | 0.82 | 860 | 1.3117 |
| 1.2139 | 0.83 | 870 | 1.3114 |
| 1.2773 | 0.84 | 880 | 1.3109 |
| 1.2324 | 0.85 | 890 | 1.3101 |
| 1.4134 | 0.86 | 900 | 1.3097 |
| 1.3464 | 0.87 | 910 | 1.3095 |
| 1.2972 | 0.88 | 920 | 1.3090 |
| 1.3305 | 0.88 | 930 | 1.3086 |
| 1.3394 | 0.89 | 940 | 1.3082 |
| 1.3666 | 0.9 | 950 | 1.3078 |
| 1.3703 | 0.91 | 960 | 1.3077 |
| 1.3019 | 0.92 | 970 | 1.3077 |
| 1.2618 | 0.93 | 980 | 1.3073 |
| 1.2808 | 0.94 | 990 | 1.3071 |
| 1.2927 | 0.95 | 1000 | 1.3069 |
| 1.2688 | 0.96 | 1010 | 1.3067 |
| 1.3312 | 0.97 | 1020 | 1.3065 |
| 1.2406 | 0.98 | 1030 | 1.3064 |
| 1.3341 | 0.99 | 1040 | 1.3062 |
| 1.3531 | 1.0 | 1050 | 1.3062 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mattshumer/TalkPhi
|
mattshumer
| 2024-01-19T18:02:40Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"cognitivecomputations/dolphin-2_6-phi-2",
"Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"conversational",
"custom_code",
"base_model:Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"base_model:merge:Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"base_model:cognitivecomputations/dolphin-2_6-phi-2",
"base_model:merge:cognitivecomputations/dolphin-2_6-phi-2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:57:51Z |
---
tags:
- merge
- mergekit
- lazymergekit
- cognitivecomputations/dolphin-2_6-phi-2
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
base_model:
- cognitivecomputations/dolphin-2_6-phi-2
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
---
# TalkPhi
TalkPhi is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)
* [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: cognitivecomputations/dolphin-2_6-phi-2
layer_range: [0, 32]
- sources:
- model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
layer_range: [0, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mattshumer/TalkPhi"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
snar7/ooo_phrase_finetuned_squad
|
snar7
| 2024-01-19T18:02:21Z | 46 | 1 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"en",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-14T14:13:58Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_keras_callback
model-index:
- name: snar7/ooo_phrase
results: []
language:
- en
pipeline_tag: question-answering
widget:
- text: "What is the out office duration ?"
context: "Good morning, everyone! I'll be on vacation starting today until Friday, so please reach out to my colleagues for assistance."
example_title: "Question Answering"
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# snar7/ooo_phrase
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on a private dataset of out-of-office emails tagged with the exact phrase which contains the out-of-office context.
It achieves the following results on the evaluation set:
- Eval Loss (during training): 0.2761, Epochs : 3
- Jaccard Score on a test set of tagged out-of-office phrases: ~ 94%
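A minimal usage sketch with the question-answering pipeline (assumption: TensorFlow is installed, since the checkpoint is published in TF format; the question and context mirror the widget example above):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="snar7/ooo_phrase_finetuned_squad", framework="tf")
result = qa(
    question="What is the out office duration ?",
    context="Good morning, everyone! I'll be on vacation starting today until Friday, "
            "so please reach out to my colleagues for assistance.",
)
print(result["answer"])  # expected to span the out-of-office phrase
```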
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.5315 | 1 |
| 0.3629 | 2 |
| 0.2761 | 3 |
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.11.0
- Datasets 2.12.0
- Tokenizers 0.13.2
|
douglasrolins/bert-base-portuguese-cased_ft-multilple-choice-enem-sample
|
douglasrolins
| 2024-01-19T17:56:32Z | 89 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-01-19T15:02:18Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-portuguese-cased_ft-multilple-choice-enem-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased_ft-multilple-choice-enem-sample
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5998
- Accuracy: 0.4022
## Model description
More information needed
## Intended uses & limitations
More information needed
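The model name suggests it scores multiple-choice (ENEM-style) questions. A minimal scoring sketch with `AutoModelForMultipleChoice` (the question and alternatives below are placeholders, not taken from the actual training or evaluation data):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "douglasrolins/bert-base-portuguese-cased_ft-multilple-choice-enem-sample"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Qual planeta é conhecido como o planeta vermelho?"
choices = ["Vênus", "Marte", "Júpiter", "Saturno", "Mercúrio"]

# Pair the question with each alternative, then score all pairs jointly.
encoding = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # shape: (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(choices[logits.argmax(-1).item()])
```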
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 346 | 1.3529 | 0.4457 |
| 1.3051 | 2.0 | 692 | 1.7823 | 0.4275 |
| 0.5312 | 3.0 | 1038 | 2.3728 | 0.3986 |
| 0.5312 | 4.0 | 1384 | 2.5998 | 0.4022 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
varun-v-rao/roberta-large-bn-adapter-3.17M-snli
|
varun-v-rao
| 2024-01-19T17:53:11Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"roberta",
"dataset:snli",
"region:us"
] | null | 2024-01-19T17:53:10Z |
---
tags:
- adapter-transformers
- roberta
datasets:
- snli
---
# Adapter `varun-v-rao/roberta-large-bn-adapter-3.17M-snli` for roberta-large
An [adapter](https://adapterhub.ml) for the `roberta-large` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-large")
adapter_name = model.load_adapter("varun-v-rao/roberta-large-bn-adapter-3.17M-snli", source="hf", set_active=True)
```
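A hedged inference sketch follows, assuming the adapter was saved together with its SNLI classification head (otherwise a head must be added first); `model` is the `AutoAdapterModel` loaded above:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# Encode the premise/hypothesis pair and run it through the adapter-equipped model.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # label order depends on the stored head configuration
```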
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
LoneStriker/WinterGoddess-1.4x-70B-L2-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T17:52:46Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:38:07Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and had tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? I used the DARE method to merge the 3 models, then trained a qLoRA on top with Axolotl, then used lora-merge, copying the files of the base merged model over because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
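For convenience, here is a small helper (a sketch, not part of the original release) that assembles the Alpaca-style prompts shown above:

```python
from typing import Optional

def alpaca_prompt(instruction: str, context: Optional[str] = None) -> str:
    """Build an Alpaca-format prompt, with or without an Input block."""
    if context:
        return f"### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n### Response:\n"
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca_prompt("Summarize the passage.", "Winter is coming."))
```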
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
keerthibala/my_awesome_model
|
keerthibala
| 2024-01-19T17:52:05Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T15:37:33Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: keerthibala/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# keerthibala/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0653
- Validation Loss: 0.2081
- Train Accuracy: 0.9306
- Epoch: 2
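No usage section was provided; a minimal inference sketch, assuming the uploaded TensorFlow weights and the default `id2label` mapping, might look like this:

```python
from transformers import pipeline

# framework="tf" because the repository ships TensorFlow weights.
classifier = pipeline("text-classification", model="keerthibala/my_awesome_model", framework="tf")
print(classifier("This movie was surprisingly good."))
```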
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2487 | 0.1829 | 0.9299 | 0 |
| 0.1315 | 0.2017 | 0.9293 | 1 |
| 0.0653 | 0.2081 | 0.9306 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli
|
varun-v-rao
| 2024-01-19T17:49:20Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"dataset:snli",
"region:us"
] | null | 2024-01-19T17:49:19Z |
---
tags:
- adapter-transformers
- bert
datasets:
- snli
---
# Adapter `varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli` for bert-large-cased
An [adapter](https://adapterhub.ml) for the `bert-large-cased` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-large-cased")
adapter_name = model.load_adapter("varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
laureanadcastro/clasificador-muchocine
|
laureanadcastro
| 2024-01-19T17:43:08Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"es",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T17:41:45Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4366
- Accuracy: 0.4323
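A minimal inference sketch, assuming the default `id2label` mapping (presumably the 1-5 star ratings of the muchocine review corpus):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="laureanadcastro/clasificador-muchocine")
print(clf("La película es entretenida, pero el guion es flojo."))
```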
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3572 | 0.3781 |
| 1.4264 | 2.0 | 776 | 1.3545 | 0.4206 |
| 0.9992 | 3.0 | 1164 | 1.4366 | 0.4323 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
openmindrobotika/Taxi-v3
|
openmindrobotika
| 2024-01-19T17:41:20Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T17:41:18Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="openmindrobotika/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mattshumer/PentaPhi
|
mattshumer
| 2024-01-19T17:38:59Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rhysjones/phi-2-orange",
"lxuechen/phi-2-dpo",
"cognitivecomputations/dolphin-2_6-phi-2",
"Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"mrm8488/phi-2-coder",
"custom_code",
"base_model:Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"base_model:merge:Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
"base_model:cognitivecomputations/dolphin-2_6-phi-2",
"base_model:merge:cognitivecomputations/dolphin-2_6-phi-2",
"base_model:lxuechen/phi-2-dpo",
"base_model:merge:lxuechen/phi-2-dpo",
"base_model:mrm8488/phi-2-coder",
"base_model:merge:mrm8488/phi-2-coder",
"base_model:rhysjones/phi-2-orange",
"base_model:merge:rhysjones/phi-2-orange",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:33:59Z |
---
tags:
- merge
- mergekit
- lazymergekit
- rhysjones/phi-2-orange
- lxuechen/phi-2-dpo
- cognitivecomputations/dolphin-2_6-phi-2
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
- mrm8488/phi-2-coder
base_model:
- rhysjones/phi-2-orange
- lxuechen/phi-2-dpo
- cognitivecomputations/dolphin-2_6-phi-2
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
- mrm8488/phi-2-coder
---
# PentaPhi
PentaPhi is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [lxuechen/phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)
* [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: rhysjones/phi-2-orange
layer_range: [0, 32]
- sources:
- model: lxuechen/phi-2-dpo
layer_range: [0, 32]
- sources:
- model: cognitivecomputations/dolphin-2_6-phi-2
layer_range: [0, 32]
- sources:
- model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
layer_range: [0, 32]
- sources:
- model: mrm8488/phi-2-coder
layer_range: [0, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mattshumer/PentaPhi"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
LoneStriker/WinterGoddess-1.4x-70B-L2-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-19T17:38:05Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:17:46Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and had tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? I used the DARE method to merge the 3 models, then trained a qLoRA on top with Axolotl, then used lora-merge, copying the files of the base merged model over because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
leveldevai/BeagleMist-7B
|
leveldevai
| 2024-01-19T17:34:37Z | 1,370 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.5",
"leveldevai/TurdusBeagle-7B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:26:36Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- EmbeddedLLM/Mistral-7B-Merge-14-v0.5
- leveldevai/TurdusBeagle-7B
---
# BeagleMist-7B
BeagleMist-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.5](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.5)
* [leveldevai/TurdusBeagle-7B](https://huggingface.co/leveldevai/TurdusBeagle-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.5
layer_range: [0, 32]
- model: leveldevai/TurdusBeagle-7B
layer_range: [0, 32]
merge_method: slerp
base_model: leveldevai/TurdusBeagle-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.45 # fallback for rest of tensors
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "leveldevai/BeagleMist-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
philschmid/CodeLlama-7b-hf
|
philschmid
| 2024-01-19T17:18:47Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:18:47Z |
---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 7B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
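Since infilling is listed above as a capability, here is a hedged sketch of fill-in-the-middle generation. It assumes the CodeLlama tokenizer in recent `transformers` releases (>= 4.33) expands the special `<FILL_ME>` token:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated middle part and splice it back into the prompt.
filling = tokenizer.batch_decode(output[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))
```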
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base model of 7B parameters.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
|
LoneStriker/WinterGoddess-1.4x-70B-L2-2.4bpw-h6-exl2
|
LoneStriker
| 2024-01-19T17:17:42Z | 7 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T17:00:13Z |
---
license: cc-by-nc-4.0
language:
- en
---
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and had tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? I used the DARE method to merge the 3 models, then trained a qLoRA on top with Axolotl, then used lora-merge, copying the files of the base merged model over because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
mcpotato/42
|
mcpotato
| 2024-01-19T17:06:35Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: mit
---
# 42 is the answer
Don't forget your towel!
## Panic
Don't
## Happiness
Is more important than being right
|
zenoverflow/madlad400-3b-mt-int8-float32
|
zenoverflow
| 2024-01-19T17:01:44Z | 20 | 3 |
transformers
|
[
"transformers",
"translation",
"license:apache-2.0",
"region:us"
] |
translation
| 2024-01-19T16:19:18Z |
---
license: apache-2.0
pipeline_tag: translation
inference: false
---
Quantization of [madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt) using [Ctranslate2](https://github.com/OpenNMT/CTranslate2) for running on CPU.
Example usage:
```python
import ctranslate2, transformers
from huggingface_hub import snapshot_download
model_path = snapshot_download("zenoverflow/madlad400-3b-mt-int8-float32")
print("\n", end="")
translator = ctranslate2.Translator(model_path, device="cpu")
tokenizer = transformers.T5Tokenizer.from_pretrained(model_path)
target_lang_code = "ja"
source_text = "This sentence has no meaning."
input_text = f"<2{target_lang_code}> {source_text}"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
print(output_text)
```
|
Sadik-Sikder/Itakhola_model
|
Sadik-Sikder
| 2024-01-19T17:01:29Z | 19 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"Landscape",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-19T16:59:27Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- Landscape
widget:
- text: a photo of Heritage_of_comilla landscape in the Earth
---
# DreamBooth model for the Heritage_of_comilla concept trained by Sadik-Sikder on the Sadik-Sikder/itakhola dataset.
This is a Stable Diffusion model fine-tuned on the Heritage_of_comilla concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of Heritage_of_comilla landscape**
## Description
This is a Stable Diffusion model specialized on `landscape` images for the Landscape theme.
## Usage
```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('Sadik-Sikder/Itakhola_model')
# The pipeline needs a prompt; use the instance prompt the model was trained on.
image = pipeline("a photo of Heritage_of_comilla landscape").images[0]
image
```
|
LoneStriker/TenyxChat-8x7B-v1-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-19T16:59:51Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T16:43:03Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. Chat template usage and a Hugging Face generation example are shown below.
### Chat Template (Jinja)
```rust
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 is the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
```
|
brenomatos/xlm-roberta-base-finetuned-language-detection
|
brenomatos
| 2024-01-19T16:58:35Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T16:58:05Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-language-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-language-detection
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0431
- Accuracy: 0.9935
- F1: 0.9935
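A minimal inference sketch, assuming the checkpoint stores its language labels in `id2label`:

```python
from transformers import pipeline

lang_id = pipeline("text-classification", model="brenomatos/xlm-roberta-base-finetuned-language-detection")
print(lang_id(["Guten Morgen, wie geht es dir?", "Bom dia, tudo bem?"]))
```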
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1377 | 1.0 | 1094 | 0.0431 | 0.9935 | 0.9935 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
imabanana/ppo-Huggy
|
imabanana
| 2024-01-19T16:50:38Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-19T16:50:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: imabanana/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Rishi-19/Profanity_Detection_Model_2
|
Rishi-19
| 2024-01-19T16:36:59Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T15:07:05Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Rishi-19/Profanity_Detection_Model_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rishi-19/Profanity_Detection_Model_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0259
- Validation Loss: 0.2283
- Train Accuracy: 0.9446
- Epoch: 3
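A minimal inference sketch, assuming the uploaded TensorFlow weights and binary labels (profane vs. clean, as implied by the model name); the exact label names depend on the stored `id2label` mapping:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Rishi-19/Profanity_Detection_Model_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("You are a wonderful person.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # class probabilities per input
```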
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18825, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1605 | 0.1339 | 0.9466 | 0 |
| 0.1013 | 0.1371 | 0.9456 | 1 |
| 0.0549 | 0.1868 | 0.9434 | 2 |
| 0.0259 | 0.2283 | 0.9446 | 3 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
homerquan/dqn-SpaceInvadersNoFrameskip-v4
|
homerquan
| 2024-01-19T16:18:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T16:17:51Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 627.00 +/- 152.17
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga homerquan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga homerquan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga homerquan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
st-karlos-efood/setfit-multilabel-example-classifier-chain-25iters
|
st-karlos-efood
| 2024-01-19T16:18:16Z | 4 | 0 |
setfit
|
[
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:ethos",
"arxiv:2209.11055",
"base_model:lighteternal/stsb-xlm-r-greek-transfer",
"base_model:finetune:lighteternal/stsb-xlm-r-greek-transfer",
"model-index",
"region:us"
] |
text-classification
| 2024-01-19T16:17:38Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- ethos
metrics:
- accuracy
widget:
- text: "This is what happens when you'\x80\x99re a bitch and a cunt. Looks like\
\ that guy called it correctly. Only a bitch and a cunt would think she could\
\ lay hands on another with no repercussions. Maybe after that seizure she won'\x80\
\x99t be as much of a bitchy cunt."
- text: What if we send every men to mars to start a new life there! They are useless
here (Of course milk them first to store their juices)
- text: Honestly, go fuck yourself! bitch!
- text: Hindus take my ass please
- text: Im going to choke you with your cross necklace idiotic religious pig
pipeline_tag: text-classification
inference: false
base_model: lighteternal/stsb-xlm-r-greek-transfer
model-index:
- name: SetFit with lighteternal/stsb-xlm-r-greek-transfer
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ethos
type: ethos
split: test
metrics:
- type: accuracy
value: 0.20533333333333334
name: Accuracy
---
# SetFit with lighteternal/stsb-xlm-r-greek-transfer
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [ethos](https://huggingface.co/datasets/ethos) dataset that can be used for Text Classification. This SetFit model uses [lighteternal/stsb-xlm-r-greek-transfer](https://huggingface.co/lighteternal/stsb-xlm-r-greek-transfer) as the Sentence Transformer embedding model. A ClassifierChain instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [lighteternal/stsb-xlm-r-greek-transfer](https://huggingface.co/lighteternal/stsb-xlm-r-greek-transfer)
- **Classification head:** a ClassifierChain instance
- **Maximum Sequence Length:** 400 tokens
<!-- - **Number of Classes:** Unknown -->
- **Training Dataset:** [ethos](https://huggingface.co/datasets/ethos)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.2053 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("st-karlos-efood/setfit-multilabel-example-classifier-chain-25iters")
# Run inference
preds = model("Hindus take my ass please")
```
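Because the classification head is a ClassifierChain, predictions are multilabel; a short sketch of batch inference (output shape assumed to be one 0/1 indicator per label):

```python
# Batch prediction with the multilabel (ClassifierChain) head.
texts = ["Hindus take my ass please", "Have a lovely day."]
preds = model.predict(texts)
print(preds)  # one row per input, one binary indicator per label
```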
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.9307 | 61 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 25
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.2027 | - |
| 0.0305 | 50 | 0.2092 | - |
| 0.0609 | 100 | 0.1605 | - |
| 0.0914 | 150 | 0.1726 | - |
| 0.1219 | 200 | 0.1322 | - |
| 0.1523 | 250 | 0.1252 | - |
| 0.1828 | 300 | 0.1404 | - |
| 0.2133 | 350 | 0.0927 | - |
| 0.2438 | 400 | 0.1039 | - |
| 0.2742 | 450 | 0.0904 | - |
| 0.3047 | 500 | 0.1194 | - |
| 0.3352 | 550 | 0.1024 | - |
| 0.3656 | 600 | 0.151 | - |
| 0.3961 | 650 | 0.0842 | - |
| 0.4266 | 700 | 0.1158 | - |
| 0.4570 | 750 | 0.214 | - |
| 0.4875 | 800 | 0.1167 | - |
| 0.5180 | 850 | 0.1174 | - |
| 0.5484 | 900 | 0.1567 | - |
| 0.5789 | 950 | 0.0726 | - |
| 0.6094 | 1000 | 0.0741 | - |
| 0.6399 | 1050 | 0.0841 | - |
| 0.6703 | 1100 | 0.0606 | - |
| 0.7008 | 1150 | 0.1005 | - |
| 0.7313 | 1200 | 0.1236 | - |
| 0.7617 | 1250 | 0.141 | - |
| 0.7922 | 1300 | 0.1611 | - |
| 0.8227 | 1350 | 0.1068 | - |
| 0.8531 | 1400 | 0.0542 | - |
| 0.8836 | 1450 | 0.1635 | - |
| 0.9141 | 1500 | 0.106 | - |
| 0.9445 | 1550 | 0.0817 | - |
| 0.9750 | 1600 | 0.1157 | - |
| 1.0055 | 1650 | 0.1031 | - |
| 1.0360 | 1700 | 0.0969 | - |
| 1.0664 | 1750 | 0.0742 | - |
| 1.0969 | 1800 | 0.0697 | - |
| 1.1274 | 1850 | 0.1072 | - |
| 1.1578 | 1900 | 0.0593 | - |
| 1.1883 | 1950 | 0.1102 | - |
| 1.2188 | 2000 | 0.1586 | - |
| 1.2492 | 2050 | 0.1523 | - |
| 1.2797 | 2100 | 0.0921 | - |
| 1.3102 | 2150 | 0.0634 | - |
| 1.3406 | 2200 | 0.073 | - |
| 1.3711 | 2250 | 0.1131 | - |
| 1.4016 | 2300 | 0.0493 | - |
| 1.4321 | 2350 | 0.106 | - |
| 1.4625 | 2400 | 0.0585 | - |
| 1.4930 | 2450 | 0.1058 | - |
| 1.5235 | 2500 | 0.0892 | - |
| 1.5539 | 2550 | 0.0649 | - |
| 1.5844 | 2600 | 0.0481 | - |
| 1.6149 | 2650 | 0.1359 | - |
| 1.6453 | 2700 | 0.0734 | - |
| 1.6758 | 2750 | 0.0762 | - |
| 1.7063 | 2800 | 0.1082 | - |
| 1.7367 | 2850 | 0.1274 | - |
| 1.7672 | 2900 | 0.0724 | - |
| 1.7977 | 2950 | 0.0842 | - |
| 1.8282 | 3000 | 0.1558 | - |
| 1.8586 | 3050 | 0.071 | - |
| 1.8891 | 3100 | 0.1716 | - |
| 1.9196 | 3150 | 0.1078 | - |
| 1.9500 | 3200 | 0.1037 | - |
| 1.9805 | 3250 | 0.0773 | - |
| 2.0110 | 3300 | 0.0706 | - |
| 2.0414 | 3350 | 0.1577 | - |
| 2.0719 | 3400 | 0.0825 | - |
| 2.1024 | 3450 | 0.1227 | - |
| 2.1328 | 3500 | 0.1069 | - |
| 2.1633 | 3550 | 0.1037 | - |
| 2.1938 | 3600 | 0.0595 | - |
| 2.2243 | 3650 | 0.0569 | - |
| 2.2547 | 3700 | 0.0967 | - |
| 2.2852 | 3750 | 0.0632 | - |
| 2.3157 | 3800 | 0.1014 | - |
| 2.3461 | 3850 | 0.0868 | - |
| 2.3766 | 3900 | 0.0986 | - |
| 2.4071 | 3950 | 0.0585 | - |
| 2.4375 | 4000 | 0.063 | - |
| 2.4680 | 4050 | 0.1124 | - |
| 2.4985 | 4100 | 0.0444 | - |
| 2.5289 | 4150 | 0.1547 | - |
| 2.5594 | 4200 | 0.1087 | - |
| 2.5899 | 4250 | 0.0946 | - |
| 2.6204 | 4300 | 0.0261 | - |
| 2.6508 | 4350 | 0.0414 | - |
| 2.6813 | 4400 | 0.0715 | - |
| 2.7118 | 4450 | 0.0831 | - |
| 2.7422 | 4500 | 0.0779 | - |
| 2.7727 | 4550 | 0.1049 | - |
| 2.8032 | 4600 | 0.1224 | - |
| 2.8336 | 4650 | 0.0926 | - |
| 2.8641 | 4700 | 0.0745 | - |
| 2.8946 | 4750 | 0.0642 | - |
| 2.9250 | 4800 | 0.0536 | - |
| 2.9555 | 4850 | 0.1296 | - |
| 2.9860 | 4900 | 0.0596 | - |
| 3.0165 | 4950 | 0.0361 | - |
| 3.0469 | 5000 | 0.0592 | - |
| 3.0774 | 5050 | 0.0656 | - |
| 3.1079 | 5100 | 0.0584 | - |
| 3.1383 | 5150 | 0.0729 | - |
| 3.1688 | 5200 | 0.1037 | - |
| 3.1993 | 5250 | 0.0685 | - |
| 3.2297 | 5300 | 0.0511 | - |
| 3.2602 | 5350 | 0.0427 | - |
| 3.2907 | 5400 | 0.1067 | - |
| 3.3211 | 5450 | 0.0807 | - |
| 3.3516 | 5500 | 0.0815 | - |
| 3.3821 | 5550 | 0.1016 | - |
| 3.4126 | 5600 | 0.1034 | - |
| 3.4430 | 5650 | 0.1257 | - |
| 3.4735 | 5700 | 0.0877 | - |
| 3.5040 | 5750 | 0.0808 | - |
| 3.5344 | 5800 | 0.0926 | - |
| 3.5649 | 5850 | 0.0967 | - |
| 3.5954 | 5900 | 0.0401 | - |
| 3.6258 | 5950 | 0.0547 | - |
| 3.6563 | 6000 | 0.0872 | - |
| 3.6868 | 6050 | 0.0808 | - |
| 3.7172 | 6100 | 0.1125 | - |
| 3.7477 | 6150 | 0.1431 | - |
| 3.7782 | 6200 | 0.1039 | - |
| 3.8087 | 6250 | 0.061 | - |
| 3.8391 | 6300 | 0.1022 | - |
| 3.8696 | 6350 | 0.0394 | - |
| 3.9001 | 6400 | 0.0892 | - |
| 3.9305 | 6450 | 0.0535 | - |
| 3.9610 | 6500 | 0.0793 | - |
| 3.9915 | 6550 | 0.0462 | - |
| 4.0219 | 6600 | 0.0686 | - |
| 4.0524 | 6650 | 0.0506 | - |
| 4.0829 | 6700 | 0.1012 | - |
| 4.1133 | 6750 | 0.0852 | - |
| 4.1438 | 6800 | 0.0729 | - |
| 4.1743 | 6850 | 0.1007 | - |
| 4.2048 | 6900 | 0.0431 | - |
| 4.2352 | 6950 | 0.0683 | - |
| 4.2657 | 7000 | 0.0712 | - |
| 4.2962 | 7050 | 0.0732 | - |
| 4.3266 | 7100 | 0.0374 | - |
| 4.3571 | 7150 | 0.1015 | - |
| 4.3876 | 7200 | 0.15 | - |
| 4.4180 | 7250 | 0.0852 | - |
| 4.4485 | 7300 | 0.0714 | - |
| 4.4790 | 7350 | 0.0587 | - |
| 4.5094 | 7400 | 0.1335 | - |
| 4.5399 | 7450 | 0.1123 | - |
| 4.5704 | 7500 | 0.0538 | - |
| 4.6009 | 7550 | 0.0989 | - |
| 4.6313 | 7600 | 0.0878 | - |
| 4.6618 | 7650 | 0.0963 | - |
| 4.6923 | 7700 | 0.0991 | - |
| 4.7227 | 7750 | 0.0776 | - |
| 4.7532 | 7800 | 0.0663 | - |
| 4.7837 | 7850 | 0.0696 | - |
| 4.8141 | 7900 | 0.0704 | - |
| 4.8446 | 7950 | 0.0626 | - |
| 4.8751 | 8000 | 0.0657 | - |
| 4.9055 | 8050 | 0.0567 | - |
| 4.9360 | 8100 | 0.0619 | - |
| 4.9665 | 8150 | 0.0792 | - |
| 4.9970 | 8200 | 0.0671 | - |
| 5.0274 | 8250 | 0.1068 | - |
| 5.0579 | 8300 | 0.1111 | - |
| 5.0884 | 8350 | 0.0968 | - |
| 5.1188 | 8400 | 0.0577 | - |
| 5.1493 | 8450 | 0.0934 | - |
| 5.1798 | 8500 | 0.0854 | - |
| 5.2102 | 8550 | 0.0587 | - |
| 5.2407 | 8600 | 0.048 | - |
| 5.2712 | 8650 | 0.0829 | - |
| 5.3016 | 8700 | 0.0985 | - |
| 5.3321 | 8750 | 0.107 | - |
| 5.3626 | 8800 | 0.0662 | - |
| 5.3931 | 8850 | 0.0799 | - |
| 5.4235 | 8900 | 0.0948 | - |
| 5.4540 | 8950 | 0.087 | - |
| 5.4845 | 9000 | 0.0429 | - |
| 5.5149 | 9050 | 0.0699 | - |
| 5.5454 | 9100 | 0.0911 | - |
| 5.5759 | 9150 | 0.1268 | - |
| 5.6063 | 9200 | 0.1042 | - |
| 5.6368 | 9250 | 0.0642 | - |
| 5.6673 | 9300 | 0.0736 | - |
| 5.6977 | 9350 | 0.0329 | - |
| 5.7282 | 9400 | 0.126 | - |
| 5.7587 | 9450 | 0.0991 | - |
| 5.7892 | 9500 | 0.1038 | - |
| 5.8196 | 9550 | 0.0842 | - |
| 5.8501 | 9600 | 0.0623 | - |
| 5.8806 | 9650 | 0.0642 | - |
| 5.9110 | 9700 | 0.0902 | - |
| 5.9415 | 9750 | 0.0994 | - |
| 5.9720 | 9800 | 0.0685 | - |
| 6.0024 | 9850 | 0.0573 | - |
| 6.0329 | 9900 | 0.0537 | - |
| 6.0634 | 9950 | 0.0478 | - |
| 6.0938 | 10000 | 0.0513 | - |
| 6.1243 | 10050 | 0.0529 | - |
| 6.1548 | 10100 | 0.095 | - |
| 6.1853 | 10150 | 0.0578 | - |
| 6.2157 | 10200 | 0.0918 | - |
| 6.2462 | 10250 | 0.0594 | - |
| 6.2767 | 10300 | 0.1015 | - |
| 6.3071 | 10350 | 0.036 | - |
| 6.3376 | 10400 | 0.0524 | - |
| 6.3681 | 10450 | 0.0927 | - |
| 6.3985 | 10500 | 0.0934 | - |
| 6.4290 | 10550 | 0.0788 | - |
| 6.4595 | 10600 | 0.0842 | - |
| 6.4899 | 10650 | 0.0703 | - |
| 6.5204 | 10700 | 0.0684 | - |
| 6.5509 | 10750 | 0.0759 | - |
| 6.5814 | 10800 | 0.0271 | - |
| 6.6118 | 10850 | 0.0391 | - |
| 6.6423 | 10900 | 0.0895 | - |
| 6.6728 | 10950 | 0.054 | - |
| 6.7032 | 11000 | 0.0987 | - |
| 6.7337 | 11050 | 0.0577 | - |
| 6.7642 | 11100 | 0.0822 | - |
| 6.7946 | 11150 | 0.0986 | - |
| 6.8251 | 11200 | 0.0423 | - |
| 6.8556 | 11250 | 0.0672 | - |
| 6.8860 | 11300 | 0.0747 | - |
| 6.9165 | 11350 | 0.0873 | - |
| 6.9470 | 11400 | 0.106 | - |
| 6.9775 | 11450 | 0.0975 | - |
| 7.0079 | 11500 | 0.0957 | - |
| 7.0384 | 11550 | 0.0487 | - |
| 7.0689 | 11600 | 0.0698 | - |
| 7.0993 | 11650 | 0.0317 | - |
| 7.1298 | 11700 | 0.0732 | - |
| 7.1603 | 11750 | 0.1114 | - |
| 7.1907 | 11800 | 0.0689 | - |
| 7.2212 | 11850 | 0.1211 | - |
| 7.2517 | 11900 | 0.0753 | - |
| 7.2821 | 11950 | 0.062 | - |
| 7.3126 | 12000 | 0.075 | - |
| 7.3431 | 12050 | 0.0494 | - |
| 7.3736 | 12100 | 0.0724 | - |
| 7.4040 | 12150 | 0.0605 | - |
| 7.4345 | 12200 | 0.0508 | - |
| 7.4650 | 12250 | 0.0828 | - |
| 7.4954 | 12300 | 0.0512 | - |
| 7.5259 | 12350 | 0.1291 | - |
| 7.5564 | 12400 | 0.0459 | - |
| 7.5868 | 12450 | 0.0869 | - |
| 7.6173 | 12500 | 0.0379 | - |
| 7.6478 | 12550 | 0.1878 | - |
| 7.6782 | 12600 | 0.0824 | - |
| 7.7087 | 12650 | 0.0945 | - |
| 7.7392 | 12700 | 0.0763 | - |
| 7.7697 | 12750 | 0.0602 | - |
| 7.8001 | 12800 | 0.0342 | - |
| 7.8306 | 12850 | 0.0746 | - |
| 7.8611 | 12900 | 0.065 | - |
| 7.8915 | 12950 | 0.0749 | - |
| 7.9220 | 13000 | 0.0618 | - |
| 7.9525 | 13050 | 0.0567 | - |
| 7.9829 | 13100 | 0.069 | - |
| 8.0134 | 13150 | 0.0487 | - |
| 8.0439 | 13200 | 0.0578 | - |
| 8.0743 | 13250 | 0.0876 | - |
| 8.1048 | 13300 | 0.0942 | - |
| 8.1353 | 13350 | 0.0774 | - |
| 8.1658 | 13400 | 0.0557 | - |
| 8.1962 | 13450 | 0.0872 | - |
| 8.2267 | 13500 | 0.0652 | - |
| 8.2572 | 13550 | 0.088 | - |
| 8.2876 | 13600 | 0.05 | - |
| 8.3181 | 13650 | 0.0572 | - |
| 8.3486 | 13700 | 0.053 | - |
| 8.3790 | 13750 | 0.0745 | - |
| 8.4095 | 13800 | 0.1119 | - |
| 8.4400 | 13850 | 0.0909 | - |
| 8.4704 | 13900 | 0.0374 | - |
| 8.5009 | 13950 | 0.0515 | - |
| 8.5314 | 14000 | 0.0827 | - |
| 8.5619 | 14050 | 0.0925 | - |
| 8.5923 | 14100 | 0.0793 | - |
| 8.6228 | 14150 | 0.1123 | - |
| 8.6533 | 14200 | 0.0387 | - |
| 8.6837 | 14250 | 0.0898 | - |
| 8.7142 | 14300 | 0.0627 | - |
| 8.7447 | 14350 | 0.0863 | - |
| 8.7751 | 14400 | 0.1257 | - |
| 8.8056 | 14450 | 0.0553 | - |
| 8.8361 | 14500 | 0.0664 | - |
| 8.8665 | 14550 | 0.0641 | - |
| 8.8970 | 14600 | 0.0577 | - |
| 8.9275 | 14650 | 0.0672 | - |
| 8.9580 | 14700 | 0.0776 | - |
| 8.9884 | 14750 | 0.0951 | - |
| 9.0189 | 14800 | 0.0721 | - |
| 9.0494 | 14850 | 0.0609 | - |
| 9.0798 | 14900 | 0.0821 | - |
| 9.1103 | 14950 | 0.0477 | - |
| 9.1408 | 15000 | 0.0974 | - |
| 9.1712 | 15050 | 0.0534 | - |
| 9.2017 | 15100 | 0.0673 | - |
| 9.2322 | 15150 | 0.0549 | - |
| 9.2626 | 15200 | 0.0833 | - |
| 9.2931 | 15250 | 0.0957 | - |
| 9.3236 | 15300 | 0.0601 | - |
| 9.3541 | 15350 | 0.0702 | - |
| 9.3845 | 15400 | 0.0852 | - |
| 9.4150 | 15450 | 0.0576 | - |
| 9.4455 | 15500 | 0.1006 | - |
| 9.4759 | 15550 | 0.0697 | - |
| 9.5064 | 15600 | 0.0778 | - |
| 9.5369 | 15650 | 0.0778 | - |
| 9.5673 | 15700 | 0.0844 | - |
| 9.5978 | 15750 | 0.0724 | - |
| 9.6283 | 15800 | 0.0988 | - |
| 9.6587 | 15850 | 0.0699 | - |
| 9.6892 | 15900 | 0.0772 | - |
| 9.7197 | 15950 | 0.0757 | - |
| 9.7502 | 16000 | 0.0671 | - |
| 9.7806 | 16050 | 0.1057 | - |
| 9.8111 | 16100 | 0.075 | - |
| 9.8416 | 16150 | 0.0475 | - |
| 9.8720 | 16200 | 0.0572 | - |
| 9.9025 | 16250 | 0.1176 | - |
| 9.9330 | 16300 | 0.0552 | - |
| 9.9634 | 16350 | 0.1032 | - |
| 9.9939 | 16400 | 0.0935 | - |
| 0.0006 | 1 | 0.0579 | - |
| 0.0305 | 50 | 0.0231 | - |
| 0.0609 | 100 | 0.0598 | - |
| 0.0914 | 150 | 0.0541 | - |
| 0.1219 | 200 | 0.0534 | - |
| 0.1523 | 250 | 0.048 | - |
| 0.1828 | 300 | 0.0912 | - |
| 0.2133 | 350 | 0.0447 | - |
| 0.2438 | 400 | 0.0442 | - |
| 0.2742 | 450 | 0.0579 | - |
| 0.0006 | 1 | 0.0585 | - |
| 0.0305 | 50 | 0.0204 | - |
| 0.0609 | 100 | 0.0653 | - |
| 0.0914 | 150 | 0.0599 | - |
| 0.1219 | 200 | 0.0577 | - |
| 0.1523 | 250 | 0.0468 | - |
| 0.1828 | 300 | 0.0911 | - |
| 0.2133 | 350 | 0.0423 | - |
| 0.2438 | 400 | 0.0405 | - |
| 0.2742 | 450 | 0.0561 | - |
| 0.3047 | 500 | 0.0925 | - |
| 0.3352 | 550 | 0.0771 | - |
| 0.3656 | 600 | 0.0718 | - |
| 0.3961 | 650 | 0.0708 | - |
| 0.4266 | 700 | 0.0673 | - |
| 0.4570 | 750 | 0.1501 | - |
| 0.4875 | 800 | 0.0849 | - |
| 0.5180 | 850 | 0.1132 | - |
| 0.5484 | 900 | 0.0865 | - |
| 0.5789 | 950 | 0.0527 | - |
| 0.6094 | 1000 | 0.0552 | - |
| 0.6399 | 1050 | 0.0656 | - |
| 0.6703 | 1100 | 0.0648 | - |
| 0.7008 | 1150 | 0.0884 | - |
| 0.7313 | 1200 | 0.0803 | - |
| 0.7617 | 1250 | 0.083 | - |
| 0.7922 | 1300 | 0.0863 | - |
| 0.8227 | 1350 | 0.0731 | - |
| 0.8531 | 1400 | 0.0504 | - |
| 0.8836 | 1450 | 0.1039 | - |
| 0.9141 | 1500 | 0.0817 | - |
| 0.9445 | 1550 | 0.0655 | - |
| 0.9750 | 1600 | 0.0987 | - |
| 1.0055 | 1650 | 0.0905 | - |
| 1.0360 | 1700 | 0.088 | - |
| 1.0664 | 1750 | 0.0767 | - |
| 1.0969 | 1800 | 0.0574 | - |
| 1.1274 | 1850 | 0.0741 | - |
| 1.1578 | 1900 | 0.0529 | - |
| 1.1883 | 1950 | 0.0758 | - |
| 1.2188 | 2000 | 0.1253 | - |
| 1.2492 | 2050 | 0.1129 | - |
| 1.2797 | 2100 | 0.0852 | - |
| 1.3102 | 2150 | 0.0475 | - |
| 1.3406 | 2200 | 0.063 | - |
| 1.3711 | 2250 | 0.0893 | - |
| 1.4016 | 2300 | 0.0494 | - |
| 1.4321 | 2350 | 0.1083 | - |
| 1.4625 | 2400 | 0.0468 | - |
| 1.4930 | 2450 | 0.0902 | - |
| 1.5235 | 2500 | 0.0607 | - |
| 1.5539 | 2550 | 0.0571 | - |
| 1.5844 | 2600 | 0.0395 | - |
| 1.6149 | 2650 | 0.1184 | - |
| 1.6453 | 2700 | 0.0735 | - |
| 1.6758 | 2750 | 0.06 | - |
| 1.7063 | 2800 | 0.0646 | - |
| 1.7367 | 2850 | 0.1055 | - |
| 1.7672 | 2900 | 0.0592 | - |
| 1.7977 | 2950 | 0.0522 | - |
| 1.8282 | 3000 | 0.1025 | - |
| 1.8586 | 3050 | 0.0615 | - |
| 1.8891 | 3100 | 0.1491 | - |
| 1.9196 | 3150 | 0.0796 | - |
| 1.9500 | 3200 | 0.0768 | - |
| 1.9805 | 3250 | 0.0601 | - |
| 2.0110 | 3300 | 0.0543 | - |
| 2.0414 | 3350 | 0.1128 | - |
| 2.0719 | 3400 | 0.06 | - |
| 2.1024 | 3450 | 0.0994 | - |
| 2.1328 | 3500 | 0.1018 | - |
| 2.1633 | 3550 | 0.0915 | - |
| 2.1938 | 3600 | 0.0626 | - |
| 2.2243 | 3650 | 0.0454 | - |
| 2.2547 | 3700 | 0.0915 | - |
| 2.2852 | 3750 | 0.0334 | - |
| 2.3157 | 3800 | 0.0827 | - |
| 2.3461 | 3850 | 0.0709 | - |
| 2.3766 | 3900 | 0.0806 | - |
| 2.4071 | 3950 | 0.055 | - |
| 2.4375 | 4000 | 0.0571 | - |
| 2.4680 | 4050 | 0.1002 | - |
| 2.4985 | 4100 | 0.0492 | - |
| 2.5289 | 4150 | 0.1322 | - |
| 2.5594 | 4200 | 0.0961 | - |
| 2.5899 | 4250 | 0.0788 | - |
| 2.6204 | 4300 | 0.0243 | - |
| 2.6508 | 4350 | 0.0406 | - |
| 2.6813 | 4400 | 0.0786 | - |
| 2.7118 | 4450 | 0.0852 | - |
| 2.7422 | 4500 | 0.0789 | - |
| 2.7727 | 4550 | 0.0787 | - |
| 2.8032 | 4600 | 0.1152 | - |
| 2.8336 | 4650 | 0.0992 | - |
| 2.8641 | 4700 | 0.0599 | - |
| 2.8946 | 4750 | 0.0496 | - |
| 2.9250 | 4800 | 0.0444 | - |
| 2.9555 | 4850 | 0.0898 | - |
| 2.9860 | 4900 | 0.0422 | - |
| 3.0165 | 4950 | 0.0328 | - |
| 3.0469 | 5000 | 0.0584 | - |
| 3.0774 | 5050 | 0.052 | - |
| 3.1079 | 5100 | 0.0485 | - |
| 3.1383 | 5150 | 0.0542 | - |
| 3.1688 | 5200 | 0.0854 | - |
| 3.1993 | 5250 | 0.048 | - |
| 3.2297 | 5300 | 0.0417 | - |
| 3.2602 | 5350 | 0.0497 | - |
| 3.2907 | 5400 | 0.0809 | - |
| 3.3211 | 5450 | 0.074 | - |
| 3.3516 | 5500 | 0.0761 | - |
| 3.3821 | 5550 | 0.0768 | - |
| 3.4126 | 5600 | 0.0954 | - |
| 3.4430 | 5650 | 0.0955 | - |
| 3.4735 | 5700 | 0.0906 | - |
| 3.5040 | 5750 | 0.0916 | - |
| 3.5344 | 5800 | 0.0915 | - |
| 3.5649 | 5850 | 0.107 | - |
| 3.5954 | 5900 | 0.0327 | - |
| 3.6258 | 5950 | 0.0534 | - |
| 3.6563 | 6000 | 0.059 | - |
| 3.6868 | 6050 | 0.0806 | - |
| 3.7172 | 6100 | 0.0941 | - |
| 3.7477 | 6150 | 0.1368 | - |
| 3.7782 | 6200 | 0.0848 | - |
| 3.8087 | 6250 | 0.0625 | - |
| 3.8391 | 6300 | 0.103 | - |
| 3.8696 | 6350 | 0.0307 | - |
| 3.9001 | 6400 | 0.0716 | - |
| 3.9305 | 6450 | 0.0518 | - |
| 3.9610 | 6500 | 0.0645 | - |
| 3.9915 | 6550 | 0.0417 | - |
| 4.0219 | 6600 | 0.0588 | - |
| 4.0524 | 6650 | 0.047 | - |
| 4.0829 | 6700 | 0.0951 | - |
| 4.1133 | 6750 | 0.0689 | - |
| 4.1438 | 6800 | 0.0731 | - |
| 4.1743 | 6850 | 0.0785 | - |
| 4.2048 | 6900 | 0.0411 | - |
| 4.2352 | 6950 | 0.0568 | - |
| 4.2657 | 7000 | 0.0688 | - |
| 4.2962 | 7050 | 0.066 | - |
| 4.3266 | 7100 | 0.0313 | - |
| 4.3571 | 7150 | 0.1127 | - |
| 4.3876 | 7200 | 0.1347 | - |
| 4.4180 | 7250 | 0.0685 | - |
| 4.4485 | 7300 | 0.0693 | - |
| 4.4790 | 7350 | 0.053 | - |
| 4.5094 | 7400 | 0.1353 | - |
| 4.5399 | 7450 | 0.1057 | - |
| 4.5704 | 7500 | 0.0467 | - |
| 4.6009 | 7550 | 0.1059 | - |
| 4.6313 | 7600 | 0.0791 | - |
| 4.6618 | 7650 | 0.0928 | - |
| 4.6923 | 7700 | 0.0989 | - |
| 4.7227 | 7750 | 0.0619 | - |
| 4.7532 | 7800 | 0.0572 | - |
| 4.7837 | 7850 | 0.06 | - |
| 4.8141 | 7900 | 0.0711 | - |
| 4.8446 | 7950 | 0.0595 | - |
| 4.8751 | 8000 | 0.0675 | - |
| 4.9055 | 8050 | 0.0487 | - |
| 4.9360 | 8100 | 0.0569 | - |
| 4.9665 | 8150 | 0.0637 | - |
| 4.9970 | 8200 | 0.0634 | - |
| 5.0274 | 8250 | 0.093 | - |
| 5.0579 | 8300 | 0.1107 | - |
| 5.0884 | 8350 | 0.0883 | - |
| 5.1188 | 8400 | 0.051 | - |
| 5.1493 | 8450 | 0.1034 | - |
| 5.1798 | 8500 | 0.0832 | - |
| 5.2102 | 8550 | 0.0463 | - |
| 5.2407 | 8600 | 0.0596 | - |
| 5.2712 | 8650 | 0.078 | - |
| 5.3016 | 8700 | 0.0686 | - |
| 5.3321 | 8750 | 0.1053 | - |
| 5.3626 | 8800 | 0.0684 | - |
| 5.3931 | 8850 | 0.0684 | - |
| 5.4235 | 8900 | 0.092 | - |
| 5.4540 | 8950 | 0.088 | - |
| 5.4845 | 9000 | 0.0503 | - |
| 5.5149 | 9050 | 0.0752 | - |
| 5.5454 | 9100 | 0.0975 | - |
| 5.5759 | 9150 | 0.1306 | - |
| 5.6063 | 9200 | 0.1038 | - |
| 5.6368 | 9250 | 0.0573 | - |
| 5.6673 | 9300 | 0.0584 | - |
| 5.6977 | 9350 | 0.0309 | - |
| 5.7282 | 9400 | 0.1232 | - |
| 5.7587 | 9450 | 0.0991 | - |
| 5.7892 | 9500 | 0.1111 | - |
| 5.8196 | 9550 | 0.0845 | - |
| 5.8501 | 9600 | 0.0587 | - |
| 5.8806 | 9650 | 0.0589 | - |
| 5.9110 | 9700 | 0.0751 | - |
| 5.9415 | 9750 | 0.0929 | - |
| 5.9720 | 9800 | 0.0613 | - |
| 6.0024 | 9850 | 0.0578 | - |
| 6.0329 | 9900 | 0.0499 | - |
| 6.0634 | 9950 | 0.0435 | - |
| 6.0938 | 10000 | 0.0547 | - |
| 6.1243 | 10050 | 0.0549 | - |
| 6.1548 | 10100 | 0.0872 | - |
| 6.1853 | 10150 | 0.0509 | - |
| 6.2157 | 10200 | 0.0913 | - |
| 6.2462 | 10250 | 0.0581 | - |
| 6.2767 | 10300 | 0.0942 | - |
| 6.3071 | 10350 | 0.0273 | - |
| 6.3376 | 10400 | 0.0426 | - |
| 6.3681 | 10450 | 0.0825 | - |
| 6.3985 | 10500 | 0.0713 | - |
| 6.4290 | 10550 | 0.0698 | - |
| 6.4595 | 10600 | 0.0679 | - |
| 6.4899 | 10650 | 0.0631 | - |
| 6.5204 | 10700 | 0.0489 | - |
| 6.5509 | 10750 | 0.0599 | - |
| 6.5814 | 10800 | 0.033 | - |
| 6.6118 | 10850 | 0.0401 | - |
| 6.6423 | 10900 | 0.0782 | - |
| 6.6728 | 10950 | 0.0512 | - |
| 6.7032 | 11000 | 0.0939 | - |
| 6.7337 | 11050 | 0.0523 | - |
| 6.7642 | 11100 | 0.0784 | - |
| 6.7946 | 11150 | 0.0898 | - |
| 6.8251 | 11200 | 0.042 | - |
| 6.8556 | 11250 | 0.0616 | - |
| 6.8860 | 11300 | 0.0667 | - |
| 6.9165 | 11350 | 0.0807 | - |
| 6.9470 | 11400 | 0.1054 | - |
| 6.9775 | 11450 | 0.0961 | - |
| 7.0079 | 11500 | 0.0896 | - |
| 7.0384 | 11550 | 0.0463 | - |
| 7.0689 | 11600 | 0.065 | - |
| 7.0993 | 11650 | 0.0318 | - |
| 7.1298 | 11700 | 0.0692 | - |
| 7.1603 | 11750 | 0.1055 | - |
| 7.1907 | 11800 | 0.0619 | - |
| 7.2212 | 11850 | 0.1234 | - |
| 7.2517 | 11900 | 0.0698 | - |
| 7.2821 | 11950 | 0.0526 | - |
| 7.3126 | 12000 | 0.0695 | - |
| 7.3431 | 12050 | 0.051 | - |
| 7.3736 | 12100 | 0.0759 | - |
| 7.4040 | 12150 | 0.062 | - |
| 7.4345 | 12200 | 0.0509 | - |
| 7.4650 | 12250 | 0.0874 | - |
| 7.4954 | 12300 | 0.0534 | - |
| 7.5259 | 12350 | 0.1089 | - |
| 7.5564 | 12400 | 0.0516 | - |
| 7.5868 | 12450 | 0.0755 | - |
| 7.6173 | 12500 | 0.0295 | - |
| 7.6478 | 12550 | 0.1767 | - |
| 7.6782 | 12600 | 0.0744 | - |
| 7.7087 | 12650 | 0.0875 | - |
| 7.7392 | 12700 | 0.075 | - |
| 7.7697 | 12750 | 0.0583 | - |
| 7.8001 | 12800 | 0.0353 | - |
| 7.8306 | 12850 | 0.0638 | - |
| 7.8611 | 12900 | 0.045 | - |
| 7.8915 | 12950 | 0.0647 | - |
| 7.9220 | 13000 | 0.0593 | - |
| 7.9525 | 13050 | 0.0515 | - |
| 7.9829 | 13100 | 0.0705 | - |
| 8.0134 | 13150 | 0.0521 | - |
| 8.0439 | 13200 | 0.059 | - |
| 8.0743 | 13250 | 0.0758 | - |
| 8.1048 | 13300 | 0.0922 | - |
| 8.1353 | 13350 | 0.0859 | - |
| 8.1658 | 13400 | 0.0526 | - |
| 8.1962 | 13450 | 0.0892 | - |
| 8.2267 | 13500 | 0.0665 | - |
| 8.2572 | 13550 | 0.0711 | - |
| 8.2876 | 13600 | 0.0535 | - |
| 8.3181 | 13650 | 0.055 | - |
| 8.3486 | 13700 | 0.0516 | - |
| 8.3790 | 13750 | 0.0683 | - |
| 8.4095 | 13800 | 0.0959 | - |
| 8.4400 | 13850 | 0.0901 | - |
| 8.4704 | 13900 | 0.041 | - |
| 8.5009 | 13950 | 0.0464 | - |
| 8.5314 | 14000 | 0.0726 | - |
| 8.5619 | 14050 | 0.0959 | - |
| 8.5923 | 14100 | 0.0739 | - |
| 8.6228 | 14150 | 0.1083 | - |
| 8.6533 | 14200 | 0.0374 | - |
| 8.6837 | 14250 | 0.0767 | - |
| 8.7142 | 14300 | 0.0626 | - |
| 8.7447 | 14350 | 0.0847 | - |
| 8.7751 | 14400 | 0.1211 | - |
| 8.8056 | 14450 | 0.0457 | - |
| 8.8361 | 14500 | 0.0705 | - |
| 8.8665 | 14550 | 0.06 | - |
| 8.8970 | 14600 | 0.052 | - |
| 8.9275 | 14650 | 0.0677 | - |
| 8.9580 | 14700 | 0.0747 | - |
| 8.9884 | 14750 | 0.0877 | - |
| 9.0189 | 14800 | 0.0791 | - |
| 9.0494 | 14850 | 0.0573 | - |
| 9.0798 | 14900 | 0.0786 | - |
| 9.1103 | 14950 | 0.0376 | - |
| 9.1408 | 15000 | 0.0964 | - |
| 9.1712 | 15050 | 0.0542 | - |
| 9.2017 | 15100 | 0.0568 | - |
| 9.2322 | 15150 | 0.0583 | - |
| 9.2626 | 15200 | 0.0861 | - |
| 9.2931 | 15250 | 0.0994 | - |
| 9.3236 | 15300 | 0.0614 | - |
| 9.3541 | 15350 | 0.0689 | - |
| 9.3845 | 15400 | 0.0803 | - |
| 9.4150 | 15450 | 0.0599 | - |
| 9.4455 | 15500 | 0.0952 | - |
| 9.4759 | 15550 | 0.0597 | - |
| 9.5064 | 15600 | 0.0762 | - |
| 9.5369 | 15650 | 0.0718 | - |
| 9.5673 | 15700 | 0.0794 | - |
| 9.5978 | 15750 | 0.0721 | - |
| 9.6283 | 15800 | 0.0966 | - |
| 9.6587 | 15850 | 0.0604 | - |
| 9.6892 | 15900 | 0.0764 | - |
| 9.7197 | 15950 | 0.0707 | - |
| 9.7502 | 16000 | 0.0724 | - |
| 9.7806 | 16050 | 0.1072 | - |
| 9.8111 | 16100 | 0.0728 | - |
| 9.8416 | 16150 | 0.0516 | - |
| 9.8720 | 16200 | 0.0519 | - |
| 9.9025 | 16250 | 0.1077 | - |
| 9.9330 | 16300 | 0.0539 | - |
| 9.9634 | 16350 | 0.095 | - |
| 9.9939 | 16400 | 0.0957 | - |
| 0.0005 | 1 | 0.0632 | - |
| 0.0244 | 50 | 0.058 | - |
| 0.0488 | 100 | 0.0531 | - |
| 0.0731 | 150 | 0.0769 | - |
| 0.0975 | 200 | 0.0445 | - |
| 0.1219 | 250 | 0.0852 | - |
| 0.1463 | 300 | 0.058 | - |
| 0.1706 | 350 | 0.0611 | - |
| 0.1950 | 400 | 0.0772 | - |
| 0.2194 | 450 | 0.0806 | - |
| 0.2438 | 500 | 0.0686 | - |
| 0.2682 | 550 | 0.0591 | - |
| 0.2925 | 600 | 0.0838 | - |
| 0.3169 | 650 | 0.0862 | - |
| 0.3413 | 700 | 0.0641 | - |
| 0.3657 | 750 | 0.0628 | - |
| 0.3901 | 800 | 0.0725 | - |
| 0.4144 | 850 | 0.0756 | - |
| 0.4388 | 900 | 0.0686 | - |
| 0.4632 | 950 | 0.0789 | - |
| 0.4876 | 1000 | 0.1058 | - |
| 0.5119 | 1050 | 0.0682 | - |
| 0.5363 | 1100 | 0.0657 | - |
| 0.5607 | 1150 | 0.0531 | - |
| 0.5851 | 1200 | 0.0456 | - |
| 0.6095 | 1250 | 0.06 | - |
| 0.6338 | 1300 | 0.0567 | - |
| 0.6582 | 1350 | 0.0599 | - |
| 0.6826 | 1400 | 0.0743 | - |
| 0.7070 | 1450 | 0.0512 | - |
| 0.7314 | 1500 | 0.0805 | - |
| 0.7557 | 1550 | 0.1057 | - |
| 0.7801 | 1600 | 0.0714 | - |
| 0.8045 | 1650 | 0.0415 | - |
| 0.8289 | 1700 | 0.0531 | - |
| 0.8532 | 1750 | 0.0786 | - |
| 0.8776 | 1800 | 0.0867 | - |
| 0.9020 | 1850 | 0.0538 | - |
| 0.9264 | 1900 | 0.0734 | - |
| 0.9508 | 1950 | 0.0854 | - |
| 0.9751 | 2000 | 0.0584 | - |
| 0.9995 | 2050 | 0.0459 | - |
| 1.0239 | 2100 | 0.071 | - |
| 1.0483 | 2150 | 0.0716 | - |
| 1.0726 | 2200 | 0.0768 | - |
| 1.0970 | 2250 | 0.0778 | - |
| 1.1214 | 2300 | 0.1028 | - |
| 1.1458 | 2350 | 0.0598 | - |
| 1.1702 | 2400 | 0.0462 | - |
| 1.1945 | 2450 | 0.0494 | - |
| 1.2189 | 2500 | 0.0554 | - |
| 1.2433 | 2550 | 0.0645 | - |
| 1.2677 | 2600 | 0.0533 | - |
| 1.2921 | 2650 | 0.0404 | - |
| 1.3164 | 2700 | 0.0837 | - |
| 1.3408 | 2750 | 0.0832 | - |
| 1.3652 | 2800 | 0.0946 | - |
| 1.3896 | 2850 | 0.0807 | - |
| 1.4139 | 2900 | 0.0695 | - |
| 1.4383 | 2950 | 0.0436 | - |
| 1.4627 | 3000 | 0.0605 | - |
| 1.4871 | 3050 | 0.0918 | - |
| 1.5115 | 3100 | 0.0755 | - |
| 1.5358 | 3150 | 0.0745 | - |
| 1.5602 | 3200 | 0.0429 | - |
| 1.5846 | 3250 | 0.0651 | - |
| 1.6090 | 3300 | 0.0567 | - |
| 1.6333 | 3350 | 0.0679 | - |
| 1.6577 | 3400 | 0.0904 | - |
| 1.6821 | 3450 | 0.0671 | - |
| 1.7065 | 3500 | 0.0626 | - |
| 1.7309 | 3550 | 0.0439 | - |
| 1.7552 | 3600 | 0.1035 | - |
| 1.7796 | 3650 | 0.0818 | - |
| 1.8040 | 3700 | 0.1284 | - |
| 1.8284 | 3750 | 0.058 | - |
| 1.8528 | 3800 | 0.0608 | - |
| 1.8771 | 3850 | 0.0858 | - |
| 1.9015 | 3900 | 0.0611 | - |
| 1.9259 | 3950 | 0.0701 | - |
| 1.9503 | 4000 | 0.0882 | - |
| 1.9746 | 4050 | 0.0568 | - |
| 1.9990 | 4100 | 0.0591 | - |
| 2.0234 | 4150 | 0.0765 | - |
| 2.0478 | 4200 | 0.0697 | - |
| 2.0722 | 4250 | 0.0714 | - |
| 2.0965 | 4300 | 0.0438 | - |
| 2.1209 | 4350 | 0.0661 | - |
| 2.1453 | 4400 | 0.0626 | - |
| 2.1697 | 4450 | 0.0666 | - |
| 2.1941 | 4500 | 0.0583 | - |
| 2.2184 | 4550 | 0.088 | - |
| 2.2428 | 4600 | 0.0768 | - |
| 2.2672 | 4650 | 0.0528 | - |
| 2.2916 | 4700 | 0.0869 | - |
| 2.3159 | 4750 | 0.1001 | - |
| 2.3403 | 4800 | 0.0731 | - |
| 2.3647 | 4850 | 0.0858 | - |
| 2.3891 | 4900 | 0.0611 | - |
| 2.4135 | 4950 | 0.058 | - |
| 2.4378 | 5000 | 0.0725 | - |
| 2.4622 | 5050 | 0.0893 | - |
| 2.4866 | 5100 | 0.0649 | - |
| 2.5110 | 5150 | 0.0561 | - |
| 2.5353 | 5200 | 0.0569 | - |
| 2.5597 | 5250 | 0.0375 | - |
| 2.5841 | 5300 | 0.0925 | - |
| 2.6085 | 5350 | 0.0842 | - |
| 2.6329 | 5400 | 0.083 | - |
| 2.6572 | 5450 | 0.0713 | - |
| 2.6816 | 5500 | 0.1082 | - |
| 2.7060 | 5550 | 0.0718 | - |
| 2.7304 | 5600 | 0.0755 | - |
| 2.7548 | 5650 | 0.0863 | - |
| 2.7791 | 5700 | 0.081 | - |
| 2.8035 | 5750 | 0.0732 | - |
| 2.8279 | 5800 | 0.0769 | - |
| 2.8523 | 5850 | 0.0846 | - |
| 2.8766 | 5900 | 0.0794 | - |
| 2.9010 | 5950 | 0.0518 | - |
| 2.9254 | 6000 | 0.0495 | - |
| 2.9498 | 6050 | 0.0696 | - |
| 2.9742 | 6100 | 0.081 | - |
| 2.9985 | 6150 | 0.0505 | - |
| 3.0229 | 6200 | 0.0703 | - |
| 3.0473 | 6250 | 0.0738 | - |
| 3.0717 | 6300 | 0.07 | - |
| 3.0961 | 6350 | 0.0663 | - |
| 3.1204 | 6400 | 0.069 | - |
| 3.1448 | 6450 | 0.0665 | - |
| 3.1692 | 6500 | 0.0409 | - |
| 3.1936 | 6550 | 0.075 | - |
| 3.2179 | 6600 | 0.0519 | - |
| 3.2423 | 6650 | 0.0836 | - |
| 3.2667 | 6700 | 0.0631 | - |
| 3.2911 | 6750 | 0.0926 | - |
| 3.3155 | 6800 | 0.0443 | - |
| 3.3398 | 6850 | 0.0587 | - |
| 3.3642 | 6900 | 0.0654 | - |
| 3.3886 | 6950 | 0.0776 | - |
| 3.4130 | 7000 | 0.0563 | - |
| 3.4373 | 7050 | 0.0501 | - |
| 3.4617 | 7100 | 0.0549 | - |
| 3.4861 | 7150 | 0.0497 | - |
| 3.5105 | 7200 | 0.0782 | - |
| 3.5349 | 7250 | 0.0734 | - |
| 3.5592 | 7300 | 0.0704 | - |
| 3.5836 | 7350 | 0.062 | - |
| 3.6080 | 7400 | 0.0698 | - |
| 3.6324 | 7450 | 0.09 | - |
| 3.6568 | 7500 | 0.0585 | - |
| 3.6811 | 7550 | 0.0649 | - |
| 3.7055 | 7600 | 0.0685 | - |
| 3.7299 | 7650 | 0.0671 | - |
| 3.7543 | 7700 | 0.0576 | - |
| 3.7786 | 7750 | 0.0378 | - |
| 3.8030 | 7800 | 0.0679 | - |
| 3.8274 | 7850 | 0.0665 | - |
| 3.8518 | 7900 | 0.0701 | - |
| 3.8762 | 7950 | 0.0943 | - |
| 3.9005 | 8000 | 0.1062 | - |
| 3.9249 | 8050 | 0.0725 | - |
| 3.9493 | 8100 | 0.0595 | - |
| 3.9737 | 8150 | 0.0738 | - |
| 3.9980 | 8200 | 0.0793 | - |
| 4.0224 | 8250 | 0.0851 | - |
| 4.0468 | 8300 | 0.121 | - |
| 4.0712 | 8350 | 0.0919 | - |
| 4.0956 | 8400 | 0.0629 | - |
| 4.1199 | 8450 | 0.0518 | - |
| 4.1443 | 8500 | 0.0595 | - |
| 4.1687 | 8550 | 0.0684 | - |
| 4.1931 | 8600 | 0.0497 | - |
| 4.2175 | 8650 | 0.0375 | - |
| 4.2418 | 8700 | 0.0819 | - |
| 4.2662 | 8750 | 0.0781 | - |
| 4.2906 | 8800 | 0.0515 | - |
| 4.3150 | 8850 | 0.0756 | - |
| 4.3393 | 8900 | 0.0547 | - |
| 4.3637 | 8950 | 0.0875 | - |
| 4.3881 | 9000 | 0.0571 | - |
| 4.4125 | 9050 | 0.046 | - |
| 4.4369 | 9100 | 0.067 | - |
| 4.4612 | 9150 | 0.0646 | - |
| 4.4856 | 9200 | 0.0575 | - |
| 4.5100 | 9250 | 0.1137 | - |
| 4.5344 | 9300 | 0.0768 | - |
| 4.5588 | 9350 | 0.0542 | - |
| 4.5831 | 9400 | 0.0743 | - |
| 4.6075 | 9450 | 0.072 | - |
| 4.6319 | 9500 | 0.0606 | - |
| 4.6563 | 9550 | 0.0777 | - |
| 4.6806 | 9600 | 0.0435 | - |
| 4.7050 | 9650 | 0.065 | - |
| 4.7294 | 9700 | 0.0601 | - |
| 4.7538 | 9750 | 0.0579 | - |
| 4.7782 | 9800 | 0.0661 | - |
| 4.8025 | 9850 | 0.0569 | - |
| 4.8269 | 9900 | 0.0995 | - |
| 4.8513 | 9950 | 0.056 | - |
| 4.8757 | 10000 | 0.0705 | - |
| 4.9000 | 10050 | 0.066 | - |
| 4.9244 | 10100 | 0.0489 | - |
| 4.9488 | 10150 | 0.0709 | - |
| 4.9732 | 10200 | 0.0545 | - |
| 4.9976 | 10250 | 0.0886 | - |
| 5.0219 | 10300 | 0.0835 | - |
| 5.0463 | 10350 | 0.0635 | - |
| 5.0707 | 10400 | 0.066 | - |
| 5.0951 | 10450 | 0.0678 | - |
| 5.1195 | 10500 | 0.1006 | - |
| 5.1438 | 10550 | 0.0526 | - |
| 5.1682 | 10600 | 0.0691 | - |
| 5.1926 | 10650 | 0.0833 | - |
| 5.2170 | 10700 | 0.0512 | - |
| 5.2413 | 10750 | 0.0469 | - |
| 5.2657 | 10800 | 0.0837 | - |
| 5.2901 | 10850 | 0.0646 | - |
| 5.3145 | 10900 | 0.0843 | - |
| 5.3389 | 10950 | 0.0627 | - |
| 5.3632 | 11000 | 0.0503 | - |
| 5.3876 | 11050 | 0.0499 | - |
| 5.4120 | 11100 | 0.0823 | - |
| 5.4364 | 11150 | 0.0759 | - |
| 5.4608 | 11200 | 0.0436 | - |
| 5.4851 | 11250 | 0.0864 | - |
| 5.5095 | 11300 | 0.0792 | - |
| 5.5339 | 11350 | 0.0876 | - |
| 5.5583 | 11400 | 0.0535 | - |
| 5.5826 | 11450 | 0.0543 | - |
| 5.6070 | 11500 | 0.0549 | - |
| 5.6314 | 11550 | 0.0564 | - |
| 5.6558 | 11600 | 0.0454 | - |
| 5.6802 | 11650 | 0.061 | - |
| 5.7045 | 11700 | 0.0573 | - |
| 5.7289 | 11750 | 0.0655 | - |
| 5.7533 | 11800 | 0.0821 | - |
| 5.7777 | 11850 | 0.0608 | - |
| 5.8020 | 11900 | 0.0765 | - |
| 5.8264 | 11950 | 0.0807 | - |
| 5.8508 | 12000 | 0.0499 | - |
| 5.8752 | 12050 | 0.0862 | - |
| 5.8996 | 12100 | 0.0928 | - |
| 5.9239 | 12150 | 0.08 | - |
| 5.9483 | 12200 | 0.0553 | - |
| 5.9727 | 12250 | 0.0736 | - |
| 5.9971 | 12300 | 0.0576 | - |
| 6.0215 | 12350 | 0.0945 | - |
| 6.0458 | 12400 | 0.0669 | - |
| 6.0702 | 12450 | 0.0492 | - |
| 6.0946 | 12500 | 0.0795 | - |
| 6.1190 | 12550 | 0.0935 | - |
| 6.1433 | 12600 | 0.0554 | - |
| 6.1677 | 12650 | 0.0643 | - |
| 6.1921 | 12700 | 0.0715 | - |
| 6.2165 | 12750 | 0.0803 | - |
| 6.2409 | 12800 | 0.0745 | - |
| 6.2652 | 12850 | 0.0626 | - |
| 6.2896 | 12900 | 0.0539 | - |
| 6.3140 | 12950 | 0.0719 | - |
| 6.3384 | 13000 | 0.0465 | - |
| 6.3627 | 13050 | 0.0735 | - |
| 6.3871 | 13100 | 0.0637 | - |
| 6.4115 | 13150 | 0.0437 | - |
| 6.4359 | 13200 | 0.0744 | - |
| 6.4603 | 13250 | 0.072 | - |
| 6.4846 | 13300 | 0.0726 | - |
| 6.5090 | 13350 | 0.0721 | - |
| 6.5334 | 13400 | 0.0521 | - |
| 6.5578 | 13450 | 0.0575 | - |
| 6.5822 | 13500 | 0.0466 | - |
| 6.6065 | 13550 | 0.0572 | - |
| 6.6309 | 13600 | 0.0909 | - |
| 6.6553 | 13650 | 0.0524 | - |
| 6.6797 | 13700 | 0.0678 | - |
| 6.7040 | 13750 | 0.0548 | - |
| 6.7284 | 13800 | 0.0587 | - |
| 6.7528 | 13850 | 0.0575 | - |
| 6.7772 | 13900 | 0.0677 | - |
| 6.8016 | 13950 | 0.0452 | - |
| 6.8259 | 14000 | 0.0598 | - |
| 6.8503 | 14050 | 0.0642 | - |
| 6.8747 | 14100 | 0.0679 | - |
| 6.8991 | 14150 | 0.0371 | - |
| 6.9235 | 14200 | 0.0482 | - |
| 6.9478 | 14250 | 0.0497 | - |
| 6.9722 | 14300 | 0.0512 | - |
| 6.9966 | 14350 | 0.1054 | - |
| 7.0210 | 14400 | 0.0712 | - |
| 7.0453 | 14450 | 0.0646 | - |
| 7.0697 | 14500 | 0.1106 | - |
| 7.0941 | 14550 | 0.0642 | - |
| 7.1185 | 14600 | 0.0786 | - |
| 7.1429 | 14650 | 0.0581 | - |
| 7.1672 | 14700 | 0.0656 | - |
| 7.1916 | 14750 | 0.0756 | - |
| 7.2160 | 14800 | 0.0476 | - |
| 7.2404 | 14850 | 0.0817 | - |
| 7.2647 | 14900 | 0.0929 | - |
| 7.2891 | 14950 | 0.0547 | - |
| 7.3135 | 15000 | 0.0733 | - |
| 7.3379 | 15050 | 0.0762 | - |
| 7.3623 | 15100 | 0.0628 | - |
| 7.3866 | 15150 | 0.0601 | - |
| 7.4110 | 15200 | 0.0484 | - |
| 7.4354 | 15250 | 0.0551 | - |
| 7.4598 | 15300 | 0.0505 | - |
| 7.4842 | 15350 | 0.0437 | - |
| 7.5085 | 15400 | 0.0636 | - |
| 7.5329 | 15450 | 0.0624 | - |
| 7.5573 | 15500 | 0.0716 | - |
| 7.5817 | 15550 | 0.0508 | - |
| 7.6060 | 15600 | 0.0704 | - |
| 7.6304 | 15650 | 0.0604 | - |
| 7.6548 | 15700 | 0.0641 | - |
| 7.6792 | 15750 | 0.0653 | - |
| 7.7036 | 15800 | 0.0598 | - |
| 7.7279 | 15850 | 0.0829 | - |
| 7.7523 | 15900 | 0.0593 | - |
| 7.7767 | 15950 | 0.0631 | - |
| 7.8011 | 16000 | 0.0819 | - |
| 7.8255 | 16050 | 0.0776 | - |
| 7.8498 | 16100 | 0.0603 | - |
| 7.8742 | 16150 | 0.0499 | - |
| 7.8986 | 16200 | 0.0637 | - |
| 7.9230 | 16250 | 0.0639 | - |
| 7.9473 | 16300 | 0.0559 | - |
| 7.9717 | 16350 | 0.0621 | - |
| 7.9961 | 16400 | 0.0639 | - |
| 8.0205 | 16450 | 0.1066 | - |
| 8.0449 | 16500 | 0.0686 | - |
| 8.0692 | 16550 | 0.063 | - |
| 8.0936 | 16600 | 0.0789 | - |
| 8.1180 | 16650 | 0.0458 | - |
| 8.1424 | 16700 | 0.0622 | - |
| 8.1667 | 16750 | 0.0748 | - |
| 8.1911 | 16800 | 0.0355 | - |
| 8.2155 | 16850 | 0.0648 | - |
| 8.2399 | 16900 | 0.0618 | - |
| 8.2643 | 16950 | 0.0908 | - |
| 8.2886 | 17000 | 0.0544 | - |
| 8.3130 | 17050 | 0.0888 | - |
| 8.3374 | 17100 | 0.0531 | - |
| 8.3618 | 17150 | 0.0905 | - |
| 8.3862 | 17200 | 0.0811 | - |
| 8.4105 | 17250 | 0.0643 | - |
| 8.4349 | 17300 | 0.0775 | - |
| 8.4593 | 17350 | 0.0518 | - |
| 8.4837 | 17400 | 0.0683 | - |
| 8.5080 | 17450 | 0.0946 | - |
| 8.5324 | 17500 | 0.0642 | - |
| 8.5568 | 17550 | 0.0654 | - |
| 8.5812 | 17600 | 0.0682 | - |
| 8.6056 | 17650 | 0.0467 | - |
| 8.6299 | 17700 | 0.0811 | - |
| 8.6543 | 17750 | 0.077 | - |
| 8.6787 | 17800 | 0.0376 | - |
| 8.7031 | 17850 | 0.1028 | - |
| 8.7275 | 17900 | 0.0833 | - |
| 8.7518 | 17950 | 0.0591 | - |
| 8.7762 | 18000 | 0.0613 | - |
| 8.8006 | 18050 | 0.0633 | - |
| 8.8250 | 18100 | 0.0774 | - |
| 8.8493 | 18150 | 0.0609 | - |
| 8.8737 | 18200 | 0.0732 | - |
| 8.8981 | 18250 | 0.085 | - |
| 8.9225 | 18300 | 0.0762 | - |
| 8.9469 | 18350 | 0.0518 | - |
| 8.9712 | 18400 | 0.0806 | - |
| 8.9956 | 18450 | 0.0467 | - |
| 9.0200 | 18500 | 0.0467 | - |
| 9.0444 | 18550 | 0.0836 | - |
| 9.0687 | 18600 | 0.0452 | - |
| 9.0931 | 18650 | 0.0503 | - |
| 9.1175 | 18700 | 0.0624 | - |
| 9.1419 | 18750 | 0.0605 | - |
| 9.1663 | 18800 | 0.0829 | - |
| 9.1906 | 18850 | 0.0497 | - |
| 9.2150 | 18900 | 0.0575 | - |
| 9.2394 | 18950 | 0.0645 | - |
| 9.2638 | 19000 | 0.0956 | - |
| 9.2882 | 19050 | 0.045 | - |
| 9.3125 | 19100 | 0.0768 | - |
| 9.3369 | 19150 | 0.0793 | - |
| 9.3613 | 19200 | 0.0839 | - |
| 9.3857 | 19250 | 0.0518 | - |
| 9.4100 | 19300 | 0.0445 | - |
| 9.4344 | 19350 | 0.055 | - |
| 9.4588 | 19400 | 0.0649 | - |
| 9.4832 | 19450 | 0.0673 | - |
| 9.5076 | 19500 | 0.0492 | - |
| 9.5319 | 19550 | 0.0733 | - |
| 9.5563 | 19600 | 0.0879 | - |
| 9.5807 | 19650 | 0.0672 | - |
| 9.6051 | 19700 | 0.0612 | - |
| 9.6294 | 19750 | 0.0661 | - |
| 9.6538 | 19800 | 0.066 | - |
| 9.6782 | 19850 | 0.0661 | - |
| 9.7026 | 19900 | 0.0738 | - |
| 9.7270 | 19950 | 0.0728 | - |
| 9.7513 | 20000 | 0.0595 | - |
| 9.7757 | 20050 | 0.0601 | - |
| 9.8001 | 20100 | 0.0441 | - |
| 9.8245 | 20150 | 0.0768 | - |
| 9.8489 | 20200 | 0.0636 | - |
| 9.8732 | 20250 | 0.0796 | - |
| 9.8976 | 20300 | 0.0584 | - |
| 9.9220 | 20350 | 0.0801 | - |
| 9.9464 | 20400 | 0.0569 | - |
| 9.9707 | 20450 | 0.0552 | - |
| 9.9951 | 20500 | 0.0684 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Asmamalica/my_awesome_model
|
Asmamalica
| 2024-01-19T16:08:03Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T14:30:38Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Asmamalica/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Asmamalica/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0025
- Validation Loss: 0.0345
- Train Accuracy: 0.9935
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstruction of this optimizer in Keras is sketched after the list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9910, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
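The serialized optimizer config above corresponds roughly to the following Keras setup. This is a reconstruction sketch, not the exact training script; the schedule and Adam parameters are taken from the dict, everything else is assumed.
```python
import tensorflow as tf
# Linear decay from 2e-5 to 0 over 9910 steps, as described in the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=2e-05,
decay_steps=9910,
end_learning_rate=0.0,
power=1.0,
cycle=False,
)
# Adam with the listed betas and epsilon; weight decay and gradient clipping are disabled.
optimizer = tf.keras.optimizers.Adam(
learning_rate=lr_schedule,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-08,
)
```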
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0494 | 0.0275 | 0.991 | 0 |
| 0.0091 | 0.0282 | 0.993 | 1 |
| 0.0025 | 0.0345 | 0.9935 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
malhajar/Mistral-7B-v0.1-arabic
|
malhajar
| 2024-01-19T16:04:03Z | 169 | 9 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ar",
"en",
"dataset:malhajar/alpaca-gpt4-ar",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T16:00:00Z |
---
license: apache-2.0
datasets:
- malhajar/alpaca-gpt4-ar
language:
- ar
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
malhajar/Mistral-7B-v0.1-arabic is a fine-tuned version of [`Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1) trained with SFT and the Freeze method.
The model can answer questions in a chat format, as it was fine-tuned specifically on the instruction dataset [`alpaca-gpt4-ar`](https://huggingface.co/datasets/malhajar/alpaca-gpt4-ar).
### Model Description
- **Developed by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Arabic
- **Finetuned from model:** [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## How to Get Started with the Model
Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "malhajar/Mistral-7B-v0.1-arabic"
# Load the model in half precision and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16,
revision="main",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
question = "ما هي الحياة؟"
# Build the prompt using the template above and generate a response.
prompt = f"### Instruction: {question} ### Response:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(
inputs=input_ids,
max_new_tokens=512,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
top_k=50,
top_p=0.95,
repetition_penalty=1.3,
)
response = tokenizer.decode(output[0])
print(response)
```
|
GAYATHIRI-12/my_awesome_model
|
GAYATHIRI-12
| 2024-01-19T16:02:23Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T14:31:01Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: GAYATHIRI-12/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GAYATHIRI-12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0027
- Validation Loss: 0.0348
- Train Accuracy: 0.9945
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9910, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0457 | 0.0273 | 0.9935 | 0 |
| 0.0076 | 0.0281 | 0.9945 | 1 |
| 0.0027 | 0.0348 | 0.9945 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jenny1998/bert_adaptation_peppa_pig
|
jenny1998
| 2024-01-19T16:02:23Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-19T16:02:05Z |
---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9413 | 1.0 | 35 | 3.4740 |
| 2.9472 | 2.0 | 70 | 2.7919 |
| 2.6305 | 3.0 | 105 | 2.4121 |
| 2.3964 | 4.0 | 140 | 2.4462 |
| 2.3435 | 5.0 | 175 | 2.5296 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
FelixChao/Voldemort-10B
|
FelixChao
| 2024-01-19T15:58:49Z | 1,365 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"FelixChao/WizardDolphin-7B",
"SanjiWatsuki/Silicon-Maid-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T07:39:19Z |
---
license: apache-2.0
tags:
- merge
- FelixChao/WizardDolphin-7B
- SanjiWatsuki/Silicon-Maid-7B
---
# Voldemort-10B
Voldemort-10B is a passthrough (layer-stacking) merge of the following models, combining layers 0-24 of the first with layers 8-32 of the second:
* [FelixChao/WizardDolphin-7B](https://huggingface.co/FelixChao/WizardDolphin-7B)
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: FelixChao/WizardDolphin-7B
layer_range: [0, 24]
- sources:
- model: SanjiWatsuki/Silicon-Maid-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "FelixChao/Voldemort-10B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
HarikaR/intent_finnish_using_mobileBERT
|
HarikaR
| 2024-01-19T15:55:07Z | 45 | 0 |
transformers
|
[
"transformers",
"tf",
"mobilebert",
"text-classification",
"generated_from_keras_callback",
"base_model:google/mobilebert-uncased",
"base_model:finetune:google/mobilebert-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T10:07:10Z |
---
license: apache-2.0
base_model: google/mobilebert-uncased
tags:
- generated_from_keras_callback
model-index:
- name: HarikaR/intent_finnish_using_mobileBERT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HarikaR/intent_finnish_using_mobileBERT
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5235
- Validation Loss: 0.4924
- Train Accuracy: 0.8168
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 25935, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5893 | 0.4924 | 0.8168 | 0 |
| 0.5235 | 0.4924 | 0.8168 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
halbihn/NeuralHermes-2.5-Mistral-7B
|
halbihn
| 2024-01-19T15:51:42Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"conversational",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T03:49:18Z |
---
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
---
<center><img src="https://i.imgur.com/qIhaFNM.png"></center>
# NeuralHermes 2.5 - Mistral 7B
NeuralHermes is based on the [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. It surpasses the original model on most benchmarks (see results).
It is directly inspired by the RLHF process described by [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)'s authors to improve performance. I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on [Google Colab](https://colab.research.google.com/drive/1h4tAJStIef_BcO-OkY97X9_OFgKnFrLl). It required an A100 GPU for about an hour.
## Quantized models
* **GGUF**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GGUF
* **AWQ**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-AWQ
* **GPTQ**: https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GPTQ
* **EXL2**:
* 3.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-3.0bpw-h6-exl2
* 4.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-4.0bpw-h6-exl2
* 5.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-5.0bpw-h6-exl2
* 6.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-6.0bpw-h6-exl2
* 8.0bpw: https://huggingface.co/LoneStriker/NeuralHermes-2.5-Mistral-7B-8.0bpw-h8-exl2
## Results
**Update:** NeuralHermes-2.5 became the best Hermes-based model on the Open LLM leaderboard and one of the very best 7b models. 🎉

Teknium (author of OpenHermes-2.5-Mistral-7B) benchmarked the model ([see his tweet](https://twitter.com/Teknium1/status/1729955709377503660)).
Results are improved on every benchmark: **AGIEval** (from 43.07% to 43.62%), **GPT4All** (from 73.12% to 73.25%), and **TruthfulQA**.
### AGIEval

### GPT4All

### TruthfulQA

You can view the Weights & Biases report [here](https://api.wandb.ai/links/halbihn/uem1q2dj).
## Usage
You can run this model using [LM Studio](https://lmstudio.ai/) or any other frontend.
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer
model_id = "halbihn/NeuralHermes-2.5-Mistral-7B"
# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
response = sequences[0]['generated_text'].split("<|im_start|>assistant")[-1].strip()
print(response)
# streaming example
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
model_id = "halbihn/NeuralHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model.to(device)
def stream(
user_prompt: str,
max_tokens: int = 200,
) -> None:
"""Text streaming example
"""
system_prompt = 'Below is a conversation between Human and AI assistant named Mistral\n'
message = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt}
]
prompt = tokenizer.apply_chat_template(
message,
add_generation_prompt=True,
tokenize=False,
)
inputs = tokenizer([prompt], return_tensors="pt").to(device)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=max_tokens)
stream("Tell me about the future")
>>> The future is a vast and uncertain expanse, shaped by the collective actions and innovations of humanity. It is a blend of possibilities, technological advancements, and societal changes. Some potential aspects of the future include:
>>>
>>> 1. Technological advancements: Artificial intelligence, quantum computing, and biotechnology are expected to continue evolving, leading to breakthroughs in fields like medicine, energy, and communication.
>>>
>>> 2. Space exploration: As technology progresses, space travel may become more accessible, enabling humans to establish colonies on other planets and explore the cosmos further.
>>>
>>> 3. Climate change mitigation: The future will likely see increased efforts to combat climate change through renewable energy sources, carbon capture technologies, and sustainable practices.
>>>
>>> 4. Artificial intelligence integration: AI will likely become more integrated into daily life, assisting with tasks, automating jobs, and even influencing decision-making processes in various industries.
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
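The hyperparameters listed above map onto `peft`, `transformers`, and `trl` roughly as in the sketch below. This is a sketch rather than the author's exact script: it assumes a trl/peft version from early 2024, and preprocessing of the DPO pairs into the `prompt`/`chosen`/`rejected` format with the ChatML template is omitted.
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer  # trl ~0.7 accepted these keyword arguments

model_name = "teknium/OpenHermes-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # set a pad token if the tokenizer has none
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)

training_args = TrainingArguments(
    output_dir="./neuralhermes-dpo",  # hypothetical output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

# The card states the pairs were reformatted with the ChatML template; DPOTrainer
# expects prompt/chosen/rejected columns, so a mapping step may be required here.
dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

trainer = DPOTrainer(
    model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```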
|
le-Greg/Unit2-taxi
|
le-Greg
| 2024-01-19T15:47:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T15:47:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Unit2-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Deep RL course notebook (it downloads and unpickles the model dict)
model = load_from_hub(repo_id="le-Greg/Unit2-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
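As a quick sanity check, the loaded Q-table can be rolled out greedily. This is a sketch that assumes the pickled dictionary exposes the table under a `"qtable"` key, as in the course notebook.
```python
state, info = env.reset()
terminated, truncated = False, False
total_reward = 0
while not (terminated or truncated):
    action = int(model["qtable"][state].argmax())  # greedy action from the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```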
|
orutra11/distilbert-base-uncased-finetuned
|
orutra11
| 2024-01-19T15:45:10Z | 89 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T15:45:00Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
le-Greg/q-FrozenLake-v1-4x4-noSlippery
|
le-Greg
| 2024-01-19T15:44:39Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-19T15:44:36Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper from the Deep RL course notebook (it downloads and unpickles the model dict)
model = load_from_hub(repo_id="le-Greg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nrburnett/my_awesome_eli5_mlm_model
|
nrburnett
| 2024-01-19T15:40:28Z | 173 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-19T15:18:55Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `Trainer` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
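A minimal `Trainer` setup matching these hyperparameters might look like the sketch below. The dataset (`wikitext` here) and output directory are placeholders, since the actual training data is not documented in this card.
```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")

# Placeholder corpus; the card does not document the real training data.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    remove_columns=dataset.column_names,
)

# Random masking for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="my_awesome_eli5_mlm_model",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```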
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2465 | 1.0 | 1320 | 2.0611 |
| 2.1703 | 2.0 | 2640 | 2.0379 |
| 2.1573 | 3.0 | 3960 | 2.0117 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|