Dataset columns:

| Column | Type | Range / cardinality |
| --- | --- | --- |
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-08 19:17:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 549 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-08 18:30:19 |
| card | string | lengths 11 to 1.01M |
jysssacc/roberta-base_PrefixTuning_lr5e-05_bs4_epoch1_wd0.01
|
jysssacc
| 2024-01-09T10:02:26Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-09T10:02:03Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: roberta-base_PrefixTuning_lr5e-05_bs4_epoch1_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_PrefixTuning_lr5e-05_bs4_epoch1_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 19.1081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 13.3356 | 1.0 | 157 | 19.1081 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
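The card does not include a usage example. Below is a minimal sketch of loading this prefix-tuning adapter on top of `roberta-base` with PEFT; because the card does not state the task head, the plain `AutoModel` backbone used here is an assumption.
```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base = AutoModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Attach the prefix-tuning adapter from this repository to the base encoder.
model = PeftModel.from_pretrained(base, "jysssacc/roberta-base_PrefixTuning_lr5e-05_bs4_epoch1_wd0.01")
model.eval()
```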
|
SharonTudi/DIALOGUE2
|
SharonTudi
| 2024-01-09T09:55:53Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-14T09:37:05Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: DIALOGUE2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DIALOGUE2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3422
- Precision: 0.6751
- Recall: 0.6150
- F1: 0.6316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.3364 | 1.79 | 25 | 0.3800 | 0.6751 | 0.6150 | 0.6316 |
| 0.3019 | 3.57 | 50 | 0.3579 | 0.6751 | 0.6150 | 0.6316 |
| 0.211 | 5.36 | 75 | 0.3417 | 0.6751 | 0.6150 | 0.6316 |
| 0.2035 | 7.14 | 100 | 0.3409 | 0.6751 | 0.6150 | 0.6316 |
| 0.1817 | 8.93 | 125 | 0.3422 | 0.6751 | 0.6150 | 0.6316 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1
|
mmnga
| 2024-01-09T09:52:44Z | 28 | 18 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-16T19:09:26Z |
---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
tags:
- moe
inference: false
---
# Model Card for Mixtral-Fusion-4x7B-Instruct-v0.1
This is an experimental model created by merging the experts of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
# How we merged experts
The merge method was changed to use slerp (spherical linear interpolation); a sketch is shown below.
[Discussion](https://huggingface.co/mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1/discussions/2)
[old merge version](https://huggingface.co/mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1/tree/v0.1.0)
~~We simply take the average of every two experts.weight.~~
~~The same goes for gate.weight.~~
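As an illustration of the slerp merge described above, here is a minimal sketch of interpolating between two expert weight tensors. This is an assumption about the general technique, not the author's conversion notebook (linked below); the tensor names and the interpolation factor `t` are illustrative.
~~~python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a, b = w0.flatten().float(), w1.flatten().float()
    a_unit, b_unit = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:
        merged = (1 - t) * a + t * b  # nearly parallel: fall back to linear interpolation
    else:
        merged = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return merged.reshape(w0.shape).to(w0.dtype)

# e.g. merged = slerp(expert_0.w1.weight.data, expert_1.w1.weight.data, t=0.5)
~~~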
# How To Convert
Use a Colab high-memory CPU runtime.
[convert_mixtral_8x7b_to_4x7b.ipynb](https://huggingface.co/mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1/blob/main/notebook/convert_mixtral_8x7b_to_4x7b.ipynb)
# OtherModels
[mmnga/Mixtral-Extraction-4x7B-Instruct-v0.1](https://huggingface.co/mmnga/Mixtral-Extraction-4x7B-Instruct-v0.1)
# Usage
~~~bash
pip install git+https://github.com/huggingface/transformers --upgrade
pip install torch accelerate bitsandbytes flash_attn
~~~
~~~python
from transformers import AutoTokenizer, AutoModelForCausalLM, MixtralForCausalLM
import torch
model_name_or_path = "mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = MixtralForCausalLM.from_pretrained(model_name_or_path, load_in_8bit=True)
text = "[INST] What was John Holt's vision on education? [/INST] "
inputs = tokenizer(text, return_tensors="pt").to(model.device)  # keep inputs on the same device as the 8-bit model
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
~~~
|
Naati101/tb
|
Naati101
| 2024-01-09T09:47:58Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"image-classification",
"medical",
"region:us"
] |
image-classification
| 2024-01-09T09:47:19Z |
---
library_name: keras
tags:
- image-classification
- medical
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
prashantyai/sd-class-butterflies-32
|
prashantyai
| 2024-01-09T09:46:55Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-01-09T09:46:23Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('prashantyai/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
karandomguy/TuneNews
|
karandomguy
| 2024-01-09T09:38:03Z | 3 | 0 |
peft
|
[
"peft",
"text-generation",
"doi:10.57967/hf/1587",
"license:mit",
"region:us"
] |
text-generation
| 2023-12-30T22:52:07Z |
---
library_name: peft
license: mit
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
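For reference, here is a minimal sketch of how the quantization settings above map onto `transformers`' `BitsAndBytesConfig` when loading a base model for QLoRA-style training. This is an assumption for illustration, not the author's training script, and the base-model identifier is a placeholder (the card does not state it).
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "your-base-model" is a hypothetical placeholder; the card does not name the base model.
base_model = AutoModelForCausalLM.from_pretrained(
    "your-base-model",
    quantization_config=bnb_config,
    device_map="auto",
)
```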
### Framework versions
- PEFT 0.4.0
|
SE6446/Phasmid-2_v2
|
SE6446
| 2024-01-09T09:34:41Z | 22 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"phi",
"text-generation",
"axolotl",
"generated_from_trainer",
"custom_code",
"dataset:PygmalionAI/PIPPA",
"dataset:HuggingFaceH4/no_robots",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-08T09:59:40Z |
---
inference: false
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Phasmid-2_v2
results: []
datasets:
- PygmalionAI/PIPPA
- HuggingFaceH4/no_robots
---
```
_ (`-. ('-. .-. ('-. .-') _ .-') _ .-') _
( (OO )( OO ) / ( OO ).-. ( OO ).( '.( OO )_ ( ( OO) )
_.` \,--. ,--. / . --. /(_)---\_),--. ,--.) ,-.-') \ .'_
(__...--''| | | | | \-. \ / _ | | `.' | | |OO),`'--..._)
| / | || .| |.-'-' | |\ :` `. | | | | \| | \ '
| |_.' || | \| |_.' | '..`''.)| |'.'| | | |(_/| | ' |
| .___.'| .-. | | .-. |.-._) \| | | | ,| |_.'| | / :
| | | | | | | | | |\ /| | | |(_| | | '--' /
`--' `--' `--' `--' `--' `-----' `--' `--' `--' `-------'
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: microsoft/phi-2
model_type: PhiForCausalLM
tokenizer_type: AutoTokenizer
is_llama_derived_model: false
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: SE6446/SE6446_phasmid_ds
type: completion
hub_model_id: SE6446/Phasmid-2_v2
hub_strategy: every_save
use_auth_token: true
dataset_prepared_path: /phasmid-2-ds-path
val_set_size: 0.05
output_dir: ./phasmid-sft-out
sequence_len: 2048
sample_packing: true
pad_to_sequence_len:
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_torch
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.0003
train_on_inputs: false
group_by_length: true
bf16: true
fp16: false
tf32: true
gradient_checkpointing:
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:
warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
bos_token: "<|endoftext|>"
eos_token: "<|endoftext|>"
unk_token: "<|endoftext|>"
pad_token: "<|endoftext|>"
```
</details><br>
# Phasmid-2_v2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on a mix of no_robots and the PIPPA dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2924
## Model description
Phasmid-2 has been trained on instructional data and thus can perform far better at instruction following than phi-2. However, I have not extensively tested the model.
## Intended uses & limitations
This model is little more than a side project and I shall treat it as such.
Phasmid-2 (due to its size) can still suffer from problematic hallucinations and poor information. No effort was made to reduce potentially toxic responses, so you should fine-tune the model further if you require that.
## Inference
Ensure that einops is installed:
```bash
pip install einops
```
Phi does not work well with `device_map="auto"`, so specify the device explicitly, as in the following examples:
1. FP16 / Flash-Attention / CUDA:
```python
from transformers import AutoModelForCausalLM
import torch  # needed for the FP32 examples below

model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2_v2", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, device_map="cuda", trust_remote_code=True)
```
2. FP16 / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2_v2", torch_dtype="auto", device_map="cuda", trust_remote_code=True)
```
3. FP32 / CUDA:
```python
model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2_v2", torch_dtype=torch.float32, device_map="cuda", trust_remote_code=True)
```
4. FP32 / CPU:
```python
model = AutoModelForCausalLM.from_pretrained("SE6446/Phasmid-2_v2", torch_dtype=torch.float32, device_map="cpu", trust_remote_code=True)
```
And then use the following snippet
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SE6446/Phasmid-2_v2", trust_remote_code=True)
inputs = tokenizer('''SYSTEM: You are a helpful assistant. Please answer truthfully and politely. {custom_prompt}\n
USER: {{userinput}}\n
ASSISTANT: {{character name if applicable}}:''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
It should generate the response after "ASSISTANT:".
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3313 | 0.0 | 1 | 2.1374 |
| 2.5755 | 0.25 | 1319 | 2.5281 |
| 2.4864 | 0.5 | 2638 | 2.5314 |
| 2.0961 | 0.75 | 3957 | 2.4697 |
| 2.6547 | 1.0 | 5276 | 2.4213 |
| 2.1235 | 1.24 | 6595 | 2.3926 |
| 1.8875 | 1.49 | 7914 | 2.3233 |
| 0.9059 | 1.74 | 9233 | 2.2590 |
| 2.2046 | 1.99 | 10552 | 2.1985 |
| 1.1938 | 2.23 | 11871 | 2.2555 |
| 1.1425 | 2.48 | 13190 | 2.2393 |
| 0.6688 | 2.73 | 14509 | 2.2237 |
| 1.1111 | 2.98 | 15828 | 2.2126 |
| 0.651 | 3.21 | 17147 | 2.2859 |
| 0.8669 | 3.46 | 18466 | 2.2914 |
| 0.4149 | 3.71 | 19785 | 2.2924 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
HackerCIS/distilbert-base-uncased-finetuned-emotion
|
HackerCIS
| 2024-01-09T09:29:43Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-09T09:11:34Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9175
- name: F1
type: f1
value: 0.9173530455189519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2326
- Accuracy: 0.9175
- F1: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8354 | 1.0 | 250 | 0.3426 | 0.901 | 0.8997 |
| 0.263 | 2.0 | 500 | 0.2326 | 0.9175 | 0.9174 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
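The card omits a usage example. Here is a minimal sketch of running the fine-tuned classifier with the `transformers` pipeline; the example sentence is illustrative.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HackerCIS/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see the results of this experiment!"))
# Note: labels may appear as raw ids (e.g. 'LABEL_1') unless id2label was configured during training.
```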
|
nutorbit/yi-6b-xllm
|
nutorbit
| 2024-01-09T09:05:18Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:01-ai/Yi-6B",
"base_model:adapter:01-ai/Yi-6B",
"region:us"
] | null | 2024-01-09T09:03:31Z |
---
library_name: peft
base_model: 01-ai/Yi-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
0x7o/nanoFialka-v1
|
0x7o
| 2024-01-09T09:00:16Z | 103 | 4 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"ru",
"dataset:0x7194633/fialka-v3-data",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T08:53:01Z |
---
license: apache-2.0
datasets:
- 0x7194633/fialka-v3-data
language:
- ru
pipeline_tag: text-generation
---
# Nano Fialka v1.0
## Description
This is a test model trained for non-serious tasks. For a production environment, use [Fialka 13B](https://huggingface.co/collections/0x7194633/fialka-llms-658a87c2003ceee6937a0d2e).
## Usage
The model uses the same prompt format as Zephyr.
```
<|user|>
Что такое мем?</s>
<|assistant|>
Мем (англ. meme) — это единица культурной информации, которая распространяется в социальных сетях и других онлайн-платформах с помощью цифровых технологий или через физический контакт. Мемы могут быть связаны между собой тематически или иметь общие черты, такие как использование определенных слов или фраз для создания определенного настроения или выражения эмоций. Они также могут содержать информацию о культуре, истории или науке, которую можно использовать для обучения новым вещам или расширения кругозора.
```
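As a usage illustration, here is a minimal sketch of prompting the model with the Zephyr-style format shown above via `transformers`; the sampling settings are assumptions, not recommendations from the author.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("0x7o/nanoFialka-v1")
model = AutoModelForCausalLM.from_pretrained("0x7o/nanoFialka-v1")

prompt = "<|user|>\nЧто такое мем?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```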
|
tonitt97/robertuito-allData-finetuned-class
|
tonitt97
| 2024-01-09T08:57:12Z | 176 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:pysentimiento/robertuito-base-uncased",
"base_model:finetune:pysentimiento/robertuito-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-09T08:56:51Z |
---
base_model: pysentimiento/robertuito-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
model-index:
- name: robertuito-allData-finetuned-class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robertuito-allData-finetuned-class
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6512
- F1: 0.7470
- Recall: 0.7524
- Accuracy: 0.7677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.989919952299843e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 15
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|
| No log | 1.0 | 103 | 0.6829 | 0.7074 | 0.7162 | 0.7399 |
| No log | 2.0 | 206 | 0.6096 | 0.7326 | 0.7250 | 0.7632 |
| No log | 3.0 | 309 | 0.6512 | 0.7470 | 0.7524 | 0.7677 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
uttam333/layoutlm-custom
|
uttam333
| 2024-01-09T08:41:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-09T08:34:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: layoutlm-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-custom
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1583
- Noise: {'precision': 0.8818897637795275, 'recall': 0.8736349453978159, 'f1': 0.8777429467084641, 'number': 641}
- Signal: {'precision': 0.861198738170347, 'recall': 0.853125, 'f1': 0.8571428571428572, 'number': 640}
- Overall Precision: 0.8716
- Overall Recall: 0.8634
- Overall F1: 0.8675
- Overall Accuracy: 0.9656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Noise | Signal | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3882 | 1.0 | 18 | 0.2617 | {'precision': 0.6654804270462633, 'recall': 0.5834633385335414, 'f1': 0.6217788861180383, 'number': 641} | {'precision': 0.6149732620320856, 'recall': 0.5390625, 'f1': 0.5745212323064114, 'number': 640} | 0.6402 | 0.5613 | 0.5982 | 0.8986 |
| 0.1694 | 2.0 | 36 | 0.1752 | {'precision': 0.7387820512820513, 'recall': 0.719188767550702, 'f1': 0.7288537549407115, 'number': 641} | {'precision': 0.709470304975923, 'recall': 0.690625, 'f1': 0.6999208234362629, 'number': 640} | 0.7241 | 0.7049 | 0.7144 | 0.9296 |
| 0.1039 | 3.0 | 54 | 0.1356 | {'precision': 0.7865168539325843, 'recall': 0.7644305772230889, 'f1': 0.7753164556962026, 'number': 641} | {'precision': 0.77491961414791, 'recall': 0.753125, 'f1': 0.7638668779714739, 'number': 640} | 0.7807 | 0.7588 | 0.7696 | 0.9439 |
| 0.064 | 4.0 | 72 | 0.1342 | {'precision': 0.8220472440944881, 'recall': 0.8143525741029641, 'f1': 0.8181818181818181, 'number': 641} | {'precision': 0.8028391167192429, 'recall': 0.7953125, 'f1': 0.7990580847723705, 'number': 640} | 0.8125 | 0.8048 | 0.8086 | 0.9522 |
| 0.0433 | 5.0 | 90 | 0.1241 | {'precision': 0.8544303797468354, 'recall': 0.8424336973478939, 'f1': 0.8483896307934014, 'number': 641} | {'precision': 0.8320126782884311, 'recall': 0.8203125, 'f1': 0.8261211644374509, 'number': 640} | 0.8432 | 0.8314 | 0.8373 | 0.9601 |
| 0.0293 | 6.0 | 108 | 0.1274 | {'precision': 0.8650793650793651, 'recall': 0.8502340093603744, 'f1': 0.8575924468922109, 'number': 641} | {'precision': 0.8378378378378378, 'recall': 0.8234375, 'f1': 0.830575256107171, 'number': 640} | 0.8515 | 0.8368 | 0.8441 | 0.9617 |
| 0.0199 | 7.0 | 126 | 0.1372 | {'precision': 0.8722397476340694, 'recall': 0.8627145085803433, 'f1': 0.8674509803921568, 'number': 641} | {'precision': 0.8530805687203792, 'recall': 0.84375, 'f1': 0.8483896307934015, 'number': 640} | 0.8627 | 0.8532 | 0.8579 | 0.9640 |
| 0.0139 | 8.0 | 144 | 0.1386 | {'precision': 0.8839427662957074, 'recall': 0.8673946957878315, 'f1': 0.8755905511811023, 'number': 641} | {'precision': 0.856687898089172, 'recall': 0.840625, 'f1': 0.8485804416403785, 'number': 640} | 0.8703 | 0.8540 | 0.8621 | 0.9656 |
| 0.0126 | 9.0 | 162 | 0.1467 | {'precision': 0.8829113924050633, 'recall': 0.8705148205928237, 'f1': 0.8766692851531814, 'number': 641} | {'precision': 0.8541996830427893, 'recall': 0.8421875, 'f1': 0.848151062155783, 'number': 640} | 0.8686 | 0.8564 | 0.8624 | 0.9654 |
| 0.0114 | 10.0 | 180 | 0.1531 | {'precision': 0.8694968553459119, 'recall': 0.8627145085803433, 'f1': 0.8660924040720438, 'number': 641} | {'precision': 0.8472440944881889, 'recall': 0.840625, 'f1': 0.8439215686274509, 'number': 640} | 0.8584 | 0.8517 | 0.8550 | 0.9631 |
| 0.0099 | 11.0 | 198 | 0.1581 | {'precision': 0.8703125, 'recall': 0.8689547581903276, 'f1': 0.8696330991412958, 'number': 641} | {'precision': 0.8450704225352113, 'recall': 0.84375, 'f1': 0.8444096950742768, 'number': 640} | 0.8577 | 0.8564 | 0.8570 | 0.9634 |
| 0.0064 | 12.0 | 216 | 0.1543 | {'precision': 0.8866141732283465, 'recall': 0.8783151326053042, 'f1': 0.8824451410658307, 'number': 641} | {'precision': 0.8643533123028391, 'recall': 0.85625, 'f1': 0.8602825745682888, 'number': 640} | 0.8755 | 0.8673 | 0.8714 | 0.9659 |
| 0.0059 | 13.0 | 234 | 0.1628 | {'precision': 0.8732394366197183, 'recall': 0.8705148205928237, 'f1': 0.871875, 'number': 641} | {'precision': 0.8526645768025078, 'recall': 0.85, 'f1': 0.8513302034428795, 'number': 640} | 0.8630 | 0.8603 | 0.8616 | 0.9645 |
| 0.0056 | 14.0 | 252 | 0.1587 | {'precision': 0.878740157480315, 'recall': 0.8705148205928237, 'f1': 0.8746081504702194, 'number': 641} | {'precision': 0.8580441640378549, 'recall': 0.85, 'f1': 0.8540031397174254, 'number': 640} | 0.8684 | 0.8603 | 0.8643 | 0.9651 |
| 0.005 | 15.0 | 270 | 0.1583 | {'precision': 0.8818897637795275, 'recall': 0.8736349453978159, 'f1': 0.8777429467084641, 'number': 641} | {'precision': 0.861198738170347, 'recall': 0.853125, 'f1': 0.8571428571428572, 'number': 640} | 0.8716 | 0.8634 | 0.8675 | 0.9656 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
KaungHtetCho/ppo-LunarLander-v2
|
KaungHtetCho
| 2024-01-09T08:40:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T08:40:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.08 +/- 10.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
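Until the author adds their own snippet, here is a minimal sketch of loading and running the agent. The checkpoint filename is an assumption (it is not listed on the card), and `huggingface_sb3`, `stable-baselines3`, and `gymnasium` are assumed to be installed.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is a guess at the usual convention; check the repository's files.
checkpoint = load_from_hub(repo_id="KaungHtetCho/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```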
|
1DS/adapter-category-mapping-hp-global-Llama-2-7b-chat-hf-v1
|
1DS
| 2024-01-09T08:36:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-09T08:36:39Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
amd/ese_vovnet39b
|
amd
| 2024-01-09T08:35:03Z | 0 | 0 | null |
[
"onnx",
"RyzenAI",
"vision",
"classification",
"pytorch",
"dataset:imagenet-1k",
"arxiv:1904.09730",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T09:17:27Z |
---
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
tags:
- RyzenAI
- vision
- classification
- pytorch
---
# ESE_VoVNet39b
Quantized ESE_VoVNet39b model supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/).
## Model description
VoVNet was first introduced in the paper [An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection](https://arxiv.org/abs/1904.09730). Pretrained on ImageNet-1k in timm by Ross Wightman using RandAugment RA recipe.
The model implementation is from [timm](https://huggingface.co/timm/ese_vovnet39b.ra_in1k).
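For a quick sanity check of the architecture, here is a minimal sketch that runs the float reference model from timm on a dummy input. This is an assumption for illustration; it does not use the quantized ONNX file or the Ryzen AI (IPU) flow described below.
```python
import timm
import torch

# Float reference model from timm (not the quantized ONNX shipped in this repo).
model = timm.create_model("ese_vovnet39b.ra_in1k", pretrained=True)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed ImageNet image
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # expected: torch.Size([1, 1000])
```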
## How to use
### Installation
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
Run the following script to install pre-requisites for this model.
```bash
pip install -r requirements.txt
```
### Data Preparation
Follow [ImageNet](https://huggingface.co/datasets/imagenet-1k) to prepare dataset.
### Model Evaluation
```bash
python eval_onnx.py --onnx_model ese_vovnet39b_int.onnx --ipu --provider_config Path\To\vaip_config.json --data_dir /Path/To/Your/Dataset
```
### Performance
|Metric |Accuracy on IPU|
| :----: | :----: |
|Top1/Top5| 78.96% / 94.53%|
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{lee2019energy,
title = {An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection},
author = {Lee, Youngwan and Hwang, Joong-won and Lee, Sangrok and Bae, Yuseok and Park, Jongyoul},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019}
}
```
|
1DS/adapter-title-brand-mapping-Llama-2-7b-chat-hf-v1
|
1DS
| 2024-01-09T08:23:35Z | 0 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-09T08:23:35Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Inference Function
```python
import re

# `tokenizer` and `model` are assumed to be loaded elsewhere (the Llama-2 base model with this adapter).
def generate(title):
    # Build the instruction prompt with the product title wrapped in <TITL> markers
    prompt = f"[INST]Identify the brand from the given product title.[/INST]\n\n<TITL> {title} </TITL>\n\n"
    print("Prompt:")
    print(prompt)
    encoding = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(input_ids=encoding.input_ids,
                            attention_mask=encoding.attention_mask,
                            max_new_tokens=200,
                            do_sample=True,
                            temperature=0.01,
                            eos_token_id=tokenizer.eos_token_id,
                            top_k=0)
    print()
    # Subtract the length of input_ids from output to get only the model's response
    output_text = tokenizer.decode(output[0, len(encoding.input_ids[0]):], skip_special_tokens=False)
    output_text = re.sub('\n+', '\n', output_text)  # remove excessive newline characters
    print("Generated Assistant Response:")
    print(output_text)
    return output_text
```
|
amy011872/finetune-mistral-cleaner-v2
|
amy011872
| 2024-01-09T08:20:39Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T06:13:22Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: finetune-mistral-cleaner-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-mistral-cleaner-v2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7539
## Model description
A Mistral model fine-tuned for cleaning web-sourced text.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9886 | 0.13 | 20 | 1.7551 |
| 1.7559 | 0.27 | 40 | 1.7549 |
| 2.0012 | 0.4 | 60 | 1.7547 |
| 1.6501 | 0.53 | 80 | 1.7545 |
| 1.8329 | 0.67 | 100 | 1.7543 |
| 1.9872 | 0.8 | 120 | 1.7541 |
| 1.7711 | 0.93 | 140 | 1.7539 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/beberik-Lonepino-11B-GGUF
|
s3nh
| 2024-01-09T08:19:46Z | 1 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T07:30:13Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/beberik/Lonepino-11B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
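Since the files here are single-file GGUF checkpoints, a minimal loading sketch with `llama-cpp-python` may help; the filename below is illustrative (use one of the GGUF files in this repository), and the prompt is an arbitrary example.
```python
from llama_cpp import Llama

# The filename is a placeholder; pick one of the GGUF files from this repository.
llm = Llama(model_path="lonepino-11b.Q4_K_M.gguf", n_ctx=2048)

output = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(output["choices"][0]["text"])
```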
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
kwaikeg/kagentlms_qwen_7b_mat_gguf
|
kwaikeg
| 2024-01-09T08:16:04Z | 25 | 3 | null |
[
"gguf",
"text-generation",
"en",
"zh",
"dataset:kwaikeg/KAgentInstruct",
"dataset:kwaikeg/KAgentBench",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T06:39:17Z |
---
license: cc-by-nc-nd-4.0
datasets:
- kwaikeg/KAgentInstruct
- kwaikeg/KAgentBench
language:
- en
- zh
pipeline_tag: text-generation
---
KwaiAgents ([Github](https://github.com/KwaiKEG/KwaiAgents)) is a series of Agent-related works open-sourced by the [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). The open-sourced content includes:
1. **KAgentSys-Lite**: An experimental Agent Loop implemented based on open-source search engines, browsers, time, calendar, weather, and other tools, which is only missing the memory mechanism and some search capabilities compared to the system in the paper.
2. **KAgentLMs**: A series of large language models with Agent capabilities such as planning, reflection, and tool-use, acquired through the Meta-agent tuning proposed in the paper.
3. **KAgentInstruct**: Fine-tuned data of instructions generated by the Meta-agent in the paper.
4. **KAgentBench**: Over 3,000 human-edited, automated evaluation data for testing Agent capabilities, with evaluation dimensions including planning, tool-use, reflection, concluding, and profiling.
## User Guide
### Serving with [llama.cpp](https://github.com/ggerganov/llama.cpp) (CPU)
llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc).
To install the server package and get started:
```bash
pip install llama-cpp-python[server]
python3 -m llama_cpp.server --model kagentlms_qwen_7b_mat_gguf/ggml-model-q4_0.gguf --chat_format chatml --port 8888
```
Finally, you can invoke the model with curl using the same calling format as the OpenAI API. Here's an example:
```bash
curl http://localhost:8888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
```
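Because the server exposes an OpenAI-compatible API, any OpenAI client library can also be pointed at it. Here is a minimal sketch with the `openai` Python package; the API key and model name are placeholders, since the locally served model ignores them.
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8888/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="kagentlms_qwen_7b_mat",  # placeholder; llama-cpp-python serves the single loaded model
    messages=[{"role": "user", "content": "Who is Andy Lau"}],
)
print(response.choices[0].message.content)
```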
## Citation
```
@article{pan2023kwaiagents,
author = {Haojie Pan and
Zepeng Zhai and
Hao Yuan and
Yaojia Lv and
Ruiji Fu and
Ming Liu and
Zhongyuan Wang and
Bing Qin
},
title = {KwaiAgents: Generalized Information-seeking Agent System with Large Language Models},
journal = {CoRR},
volume = {abs/2312.04889},
year = {2023}
}
```
|
ntc-ai/SDXL-LoRA-slider.HDR-high-dynamic-range
|
ntc-ai
| 2024-01-09T08:13:20Z | 38 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-09T08:13:17Z |
---
language:
- en
thumbnail: "images/evaluate/HDR, high dynamic range.../HDR, high dynamic range_17_3.0.png"
widget:
- text: HDR, high dynamic range
output:
url: images/HDR, high dynamic range_17_3.0.png
- text: HDR, high dynamic range
output:
url: images/HDR, high dynamic range_19_3.0.png
- text: HDR, high dynamic range
output:
url: images/HDR, high dynamic range_20_3.0.png
- text: HDR, high dynamic range
output:
url: images/HDR, high dynamic range_21_3.0.png
- text: HDR, high dynamic range
output:
url: images/HDR, high dynamic range_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "HDR, high dynamic range"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - HDR, high dynamic range (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/HDR, high dynamic range_17_-3.0.png" width=256 height=256 /> | <img src="images/HDR, high dynamic range_17_0.0.png" width=256 height=256 /> | <img src="images/HDR, high dynamic range_17_3.0.png" width=256 height=256 /> |
| <img src="images/HDR, high dynamic range_19_-3.0.png" width=256 height=256 /> | <img src="images/HDR, high dynamic range_19_0.0.png" width=256 height=256 /> | <img src="images/HDR, high dynamic range_19_3.0.png" width=256 height=256 /> |
| <img src="images/HDR, high dynamic range_20_-3.0.png" width=256 height=256 /> | <img src="images/HDR, high dynamic range_20_0.0.png" width=256 height=256 /> | <img src="images/HDR, high dynamic range_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
HDR, high dynamic range
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.HDR-high-dynamic-range', weight_name='HDR, high dynamic range.safetensors', adapter_name="HDR, high dynamic range")
# Activate the LoRA
pipe.set_adapters(["HDR, high dynamic range"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, HDR, high dynamic range"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 960+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
freshpearYoon/medium3
|
freshpearYoon
| 2024-01-09T08:03:25Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-09T01:54:25Z |
---
language:
- ko
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper_medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the aihub dataset.
It achieves the following results on the evaluation set:
- Cer: 15.6625
- Loss: 1.4176
- Wer: 32.4788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:-------:|:---------------:|:-------:|
| 1.8819 | 0.01 | 100 | 11.9999 | 1.5851 | 29.7754 |
| 1.6964 | 0.02 | 200 | 14.6066 | 1.4982 | 31.2945 |
| 1.6783 | 0.02 | 300 | 14.8315 | 1.4504 | 31.7318 |
| 1.6238 | 0.03 | 400 | 15.3631 | 1.4259 | 32.1490 |
| 1.7569 | 0.04 | 500 | 15.6625 | 1.4176 | 32.4788 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.15.0
- Tokenizers 0.15.0
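The card omits a usage example. Here is a minimal sketch of transcribing Korean audio with the `transformers` ASR pipeline; the audio filename is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="freshpearYoon/medium3")
result = asr("sample_ko.wav")  # placeholder path to a Korean audio file
print(result["text"])
```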
|
NaxGyumi/Taxi
|
NaxGyumi
| 2024-01-09T08:01:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T08:00:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="NaxGyumi/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
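The snippet above assumes that `gymnasium` (or `gym`) is imported and that a `load_from_hub` helper is available. A minimal sketch of such a helper follows, assuming the Q-table is stored as a pickled dictionary on the Hub (an assumption; the card does not define the helper).
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dictionary from the Hub and deserialize it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```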
|
jamesm808/ppo-LunarLander-v2
|
jamesm808
| 2024-01-09T07:52:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T09:02:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -154.64 +/- 53.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mlx-community/Llama-2-7b-WikiChat-mlx
|
mlx-community
| 2024-01-09T07:49:22Z | 2 | 0 |
mlx
|
[
"mlx",
"llama",
"en",
"license:llama2",
"region:us"
] | null | 2024-01-09T06:55:05Z |
---
language:
- en
license: llama2
tags:
- mlx
---
# Llama-2-7b-WikiChat-mlx
This model was converted to MLX format from [`stanford-oval/Llama-2-7b-WikiChat`](https://huggingface.co/stanford-oval/Llama-2-7b-WikiChat).
Refer to the [original model card](https://huggingface.co/stanford-oval/Llama-2-7b-WikiChat) for more details on the model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/Llama-2-7b-WikiChat-mlx --prompt "My name is"
```
|
baichuan-inc/Baichuan-7B
|
baichuan-inc
| 2024-01-09T07:45:22Z | 19,658 | 839 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:1910.07467",
"arxiv:2009.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T07:47:16Z |
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# Baichuan-7B
<!-- Provide a quick summary of what the model is/does. -->
Baichuan-7B是由百川智能开发的一个开源的大规模预训练模型。基于Transformer结构,在大约1.2万亿tokens上训练的70亿参数模型,支持中英双语,上下文窗口长度为4096。在标准的中文和英文权威benchmark(C-EVAL/MMLU)上均取得同尺寸最好的效果。
如果希望使用Baichuan-7B(如进行推理、Finetune等),我们推荐使用配套代码库[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
Baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).
If you wish to use Baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
## Why use Baichuan-7B
- 在同尺寸模型中Baichuan-7B达到了目前SOTA的水平,参考下面MMLU指标
- Baichuan-7B使用自有的中英文双语语料进行训练,在中文上进行优化,在C-Eval达到SOTA水平
- 不同于LLaMA完全禁止商业使用,Baichuan-7B使用更宽松的开源协议,允许用于商业目的
- Among models of the same size, Baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
- Baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
- Unlike LLaMA, which completely prohibits commercial use, Baichuan-7B employs a more lenient open-source license, allowing for commercial purposes.
## How to Get Started with the Model
The following is a 1-shot inference task using Baichuan-7B: given the title of a work, the model outputs its author; the correct output here is "夜雨寄北->李商隐" (the poem "Ye Yu Ji Bei" by Li Shangyin).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
The following is a task of performing 1-shot inference using Baichuan-7B, where the author's name is given based on the work, with the correct output being "One Hundred Years of Solitude->Gabriel Garcia Marquez"
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
inputs = tokenizer('Hamlet->Shakespeare\nOne Hundred Years of Solitude->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 百川智能(Baichuan Intelligent Technology)
- **Email**: opensource@baichuan-inc.com
- **Language(s) (NLP):** Chinese/English
- **License:** [Baichuan-7B License](https://huggingface.co/baichuan-inc/Baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
### Model Sources
<!-- Provide the basic links for the model. -->
整体模型基于标准的Transformer结构,我们采用了和LLaMA一样的模型设计
- **Position Embedding**:采用rotary-embedding,是现阶段被大多数模型采用的位置编码方案,具有很好的外推性。
- **Feedforward Layer**:采用SwiGLU,Feedforward变化为(8/3)倍的隐含层大小,即11008。
- **Layer Normalization**: 基于[RMSNorm](https://arxiv.org/abs/1910.07467)的Pre-Normalization。
具体参数和见下表
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
The overall model is based on the standard Transformer structure, and we have adopted the same model design as LLaMA:
- Position Embedding: We use rotary-embedding, which is the position encoding scheme adopted by most models at this stage, and it has excellent extrapolation capabilities.
- Feedforward Layer: We use SwiGLU. The feedforward changes to (8/3) times the size of the hidden layer, that is, 11008.
- Layer Normalization: Pre-Normalization based on [RMSNorm](https://arxiv.org/abs/1910.07467).
The specific parameters are as follows:
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
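For illustration, here is a minimal PyTorch sketch of the SwiGLU feedforward block described above. This is not the official Baichuan implementation; the class and layer names are assumptions, and only the dimensions (d_model=4096, hidden size 11008) come from the table.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """Illustrative SwiGLU MLP: down( SiLU(gate(x)) * up(x) )."""
    def __init__(self, d_model: int = 4096, hidden_size: int = 11008):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, hidden_size, bias=False)
        self.up_proj = nn.Linear(d_model, hidden_size, bias=False)
        self.down_proj = nn.Linear(hidden_size, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

# Example: a batch of 2 sequences of length 8
y = SwiGLUFeedForward()(torch.randn(2, 8, 4096))
print(y.shape)  # torch.Size([2, 8, 4096])
```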
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
我们同时开源出了和本模型配套的训练代码,允许进行高效的Finetune用于下游任务,具体参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
在没有充分评估风险和采取缓解措施的情况下投入生产使用;任何可能被视为不负责任或有害的使用案例。
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Baichuan-7B可能会产生事实上不正确的输出,不应依赖它产生事实上准确的信息。Baichuan-7B是在各种公共数据集上进行训练的。尽管我们已经做出了巨大的努力来清洗预训练数据,但这个模型可能会生成淫秽、偏见或其他冒犯性的输出。
Baichuan-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. Baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Training Details
训练具体设置参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
For specific training settings, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
## Evaluation
### Chinese Evaluation
#### C-Eval
The [C-Eval dataset](https://cevalbenchmark.com/index.html) is a comprehensive Chinese evaluation suite for foundation models, covering 52 subjects and four difficulty levels. We used its dev split as the few-shot source and ran a 5-shot test on the test split.
| Model 5-shot | Average | Avg(Hard) | STEM | Social Sciences | Humanities | Others |
|-----------------------------|---------|-----------|------|-----------------|------------|--------|
| GPT-4 | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8 |
| ChatGPT | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6 |
| Claude-v1.3 | 54.2 | 39.0 | 51.9 | 61.7 | 52.1 | 53.7 |
| Claude-instant-v1.0 | 45.9 | 35.5 | 43.1 | 53.8 | 44.2 | 45.4 |
| moss-moon-003-base (16B) | 27.4 | 24.5 | 27.0 | 29.1 | 27.2 | 26.9 |
| Ziya-LLaMA-13B-pretrain | 30.2 | 22.7 | 27.7 | 34.4 | 32.0 | 28.9 |
| LLaMA-7B-hf | 27.1 | 25.9 | 27.1 | 26.8 | 27.9 | 26.3 |
| ChatGLM-6B | 34.5 | 23.1 | 30.4 | 39.6 | 37.4 | 34.5 |
| Falcon-7B | 25.8 | 24.3 | 25.8 | 26.0 | 25.8 | 25.6 |
| Open-LLaMA-v2-pretrain (7B) | 24.0 | 22.5 | 23.1 | 25.3 | 25.2 | 23.2 |
| TigerBot-7B-base | 25.7 | 27.0 | 27.3 | 24.7 | 23.4 | 26.1 |
| Aquila-7B<sup>*</sup> | 25.5 | 25.2 | 25.6 | 24.6 | 25.2 | 26.6 |
| BLOOM-7B | 22.8 | 20.2 | 21.8 | 23.3 | 23.9 | 23.3 |
| BLOOMZ-7B | 35.7 | 25.8 | 31.3 | 43.5 | 36.6 | 35.6 |
| **Baichuan-7B** | 42.8 | 31.5 | 38.2 | 52.0 | 46.2 | 39.3 |
#### Gaokao
[Gaokao](https://github.com/ExpressAI/AI-Gaokao) is a dataset built from questions of the Chinese college entrance examination (Gaokao), used to evaluate the language ability and logical reasoning of large language models.
We kept only the single-answer multiple-choice questions and ran a unified 5-shot test on all models.
The results are shown below.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 21.41 |
| Ziya-LLaMA-13B-pretrain | 23.17 |
| Falcon-7B | 23.98 |
| TigerBot-7B-base | 25.94 |
| LLaMA-7B | 27.81 |
| ChatGLM-6B | 21.41 |
| BLOOM-7B | 26.96 |
| BLOOMZ-7B | 28.72 |
| Aquila-7B<sup>*</sup> | 24.39 |
| **Baichuan-7B** | **36.24** |
#### AGIEval
[AGIEval](https://github.com/microsoft/AGIEval) is designed to evaluate a model's general abilities on cognition- and problem-solving-related tasks.
We kept only the four-option single-answer multiple-choice questions and, after a random split, ran a unified 5-shot test on all models.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 23.49 |
| Ziya-LLaMA-13B-pretrain | 27.64 |
| Falcon-7B | 27.18 |
| TigerBot-7B-base | 25.19 |
| LLaMA-7B | 28.17 |
| ChatGLM-6B | 23.49 |
| BLOOM-7B | 26.55 |
| BLOOMZ-7B | 30.27 |
| Aquila-7B<sup>*</sup> | 25.58 |
| **Baichuan-7B** | **34.44** |
<sup>*</sup>The Aquila results are taken from the [official BAAI website](https://model.baai.ac.cn/model-detail/100098) and are for reference only.
### English Leaderboard
In addition to Chinese, we also tested the model's performance in English.
#### MMLU
[MMLU](https://arxiv.org/abs/2009.03300) is an English evaluation dataset that includes 57 multiple-choice tasks, covering elementary mathematics, American history, computer science, law, etc. The difficulty ranges from high school level to expert level, making it a mainstream LLM evaluation dataset.
We adopted the [open-source](https://github.com/hendrycks/test) evaluation scheme, and the final 5-shot results are as follows:
| Model | Humanities | Social Sciences | STEM | Other | Average |
|----------------------------------------|-----------:|:---------------:|:----:|:-----:|:-------:|
| LLaMA-7B<sup>2</sup> | 34.0 | 38.3 | 30.5 | 38.1 | 35.1 |
| Falcon-7B<sup>1</sup> | - | - | - | - | 35.0 |
| mpt-7B<sup>1</sup> | - | - | - | - | 35.6 |
| ChatGLM-6B<sup>0</sup> | 35.4 | 41.0 | 31.3 | 40.5 | 36.9 |
| BLOOM 7B<sup>0</sup> | 25.0 | 24.4 | 26.5 | 26.4 | 25.5 |
| BLOOMZ 7B<sup>0</sup> | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
| moss-moon-003-base (16B)<sup>0</sup> | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
| moss-moon-003-sft (16B)<sup>0</sup> | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
| **Baichuan-7B<sup>0</sup>** | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
The superscript in the Model column indicates the source of the results.
```
0:reimplemented
1:https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
2:https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
```
## Our Group

|
kwaikeg/kagentlms_qwen_7b_mat
|
kwaikeg
| 2024-01-09T07:45:10Z | 42 | 15 |
transformers
|
[
"transformers",
"pytorch",
"qwen",
"feature-extraction",
"text-generation",
"custom_code",
"en",
"zh",
"dataset:kwaikeg/KAgentInstruct",
"dataset:kwaikeg/KAgentBench",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
text-generation
| 2023-11-17T06:24:12Z |
---
license: cc-by-nc-nd-4.0
datasets:
- kwaikeg/KAgentInstruct
- kwaikeg/KAgentBench
language:
- en
- zh
pipeline_tag: text-generation
---
KwaiAgents ([Github](https://github.com/KwaiKEG/KwaiAgents)) is a series of Agent-related works open-sourced by the [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). The open-sourced content includes:
1. **KAgentSys-Lite**: An experimental Agent Loop implemented based on open-source search engines, browsers, time, calendar, weather, and other tools, which is only missing the memory mechanism and some search capabilities compared to the system in the paper.
2. **KAgentLMs**: A series of large language models with Agent capabilities such as planning, reflection, and tool-use, acquired through the Meta-agent tuning proposed in the paper.
3. **KAgentInstruct**: Fine-tuned data of instructions generated by the Meta-agent in the paper.
4. **KAgentBench**: Over 3,000 human-edited, automated evaluation data for testing Agent capabilities, with evaluation dimensions including planning, tool-use, reflection, concluding, and profiling.
## User Guide
### Direct usage
Tutorial can refer to [QwenLM/Qwen](https://github.com/QwenLM/Qwen)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("kwaikeg/kagentlms_qwen_7b_mat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"kwaikeg/kagentlms_qwen_7b_mat",
device_map="auto",
trust_remote_code=True
).eval()
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
```
### AgentLMs as service
#### Serving by [vLLM](https://github.com/vllm-project/vllm) (GPU)
We recommend using [vLLM](https://github.com/vllm-project/vllm) and [FastChat](https://github.com/lm-sys/FastChat) to deploy the model inference service. First, you need to install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):
```bash
pip install vllm
pip install "fschat[model_worker,webui]"
```
To deploy KAgentLMs, you first need to start the controller in one terminal.
```bash
python -m fastchat.serve.controller
```
Secondly, you should use the following command in another terminal for single-gpu inference service deployment:
```bash
python -m fastchat.serve.vllm_worker --model-path $model_path --trust-remote-code
```
Where `$model_path` is the local path of the model downloaded. If the GPU does not support Bfloat16, you can add `--dtype half` to the command line.
Thirdly, start the REST API server in the third terminal.
```bash
python -m fastchat.serve.openai_api_server --host localhost --port 8888
```
Finally, you can use the `curl` command to invoke the model in the same format as the OpenAI API. Here's an example:
```bash
curl http://localhost:8888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "kagentlms_qwen_7b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
```
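The same endpoint can also be called from Python; a small sketch using `requests`, mirroring the curl call above:
```python
import requests

# Assumes the FastChat OpenAI-compatible server from the previous step is running
resp = requests.post(
    "http://localhost:8888/v1/chat/completions",
    headers={"Content-Type": "application/json"},
    json={
        "model": "kagentlms_qwen_7b_mat",
        "messages": [{"role": "user", "content": "Who is Andy Lau"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```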
#### Serving by [Llama.cpp](https://github.com/ggerganov/llama.cpp) (CPU)
llama-cpp-python offers a web server which aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp compatible models with any OpenAI compatible client (language libraries, services, etc). The converted model can be found in [kwaikeg/kagentlms_qwen_7b_mat_gguf](https://huggingface.co/kwaikeg/kagentlms_qwen_7b_mat_gguf).
To install the server package and get started:
```bash
pip install "llama-cpp-python[server]"
python3 -m llama_cpp.server --model kagentlms_qwen_7b_mat_gguf/ggml-model-q4_0.gguf --chat_format chatml --port 8888
```
### Citation
```
@article{pan2023kwaiagents,
author = {Haojie Pan and
Zepeng Zhai and
Hao Yuan and
Yaojia Lv and
Ruiji Fu and
Ming Liu and
Zhongyuan Wang and
Bing Qin
},
title = {KwaiAgents: Generalized Information-seeking Agent System with Large Language Models},
journal = {CoRR},
volume = {abs/2312.04889},
year = {2023}
}
```
|
kar-saaragh/a2c-PandaPickAndPlace-v3
|
kar-saaragh
| 2024-01-09T07:43:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T07:38:27Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
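A minimal loading and evaluation sketch, assuming the checkpoint in this repository is a standard SB3 zip archive (the `filename` below is a guess; check the repository's file list):
```python
import gymnasium as gym
import panda_gym  # registers the PandaPickAndPlace-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption; check the repo files)
checkpoint = load_from_hub(repo_id="kar-saaragh/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```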
|
LI-ST/Mistral-7B-ko-v0.005
|
LI-ST
| 2024-01-09T07:36:16Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T10:22:04Z |
---
license: cc-by-nc-nd-4.0
language:
- en
- ko
library_name: transformers
pipeline_tag: text-generation
---
<p><h1>Mistral-7B-ko</h1></p>
basemodel: Open-Orca/Mistral-7B-OpenOrca
=================================================
<BR>
This model is a temporary model for testing.
<BR>
We will be deleting it soon.
<BR>
=================================================
|
zxhezexin/openlrm-large-obj-1.0
|
zxhezexin
| 2024-01-09T07:32:56Z | 7 | 5 |
transformers
|
[
"transformers",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-3d
| 2024-01-09T05:59:51Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM
## Overview
This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
## Model Details
| Model | Training Data | Layers | Feat. Dim | Trip. Dim. | Render Res. | Link |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| openlrm-small-obj-1.0 | Objaverse | 12 | 768 | 32 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-small-obj-1.0) |
| openlrm-base-obj-1.0 | Objaverse | 12 | 1024 | 40 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-base-obj-1.0) |
| openlrm-large-obj-1.0 | Objaverse | 16 | 1024 | 80 | 384 | [HF](https://huggingface.co/zxhezexin/openlrm-large-obj-1.0) |
| openlrm-small | Objaverse + MVImgNet | 12 | 768 | 32 | 192 | To be released |
| openlrm-base | Objaverse + MVImgNet | 12 | 1024 | 40 | 192 | To be released |
| openlrm-large | Objaverse + MVImgNet | 16 | 1024 | 80 | 384 | To be released |
## Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used in circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
|
zxhezexin/openlrm-small-obj-1.0
|
zxhezexin
| 2024-01-09T07:32:35Z | 41 | 6 |
transformers
|
[
"transformers",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-3d
| 2024-01-09T05:56:48Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM
## Overview
This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
## Model Details
| Model | Training Data | Layers | Feat. Dim | Trip. Dim. | Render Res. | Link |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| openlrm-small-obj-1.0 | Objaverse | 12 | 768 | 32 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-small-obj-1.0) |
| openlrm-base-obj-1.0 | Objaverse | 12 | 1024 | 40 | 192 | [HF](https://huggingface.co/zxhezexin/openlrm-base-obj-1.0) |
| openlrm-large-obj-1.0 | Objaverse | 16 | 1024 | 80 | 384 | [HF](https://huggingface.co/zxhezexin/openlrm-large-obj-1.0) |
| openlrm-small | Objaverse + MVImgNet | 12 | 768 | 32 | 192 | To be released |
| openlrm-base | Objaverse + MVImgNet | 12 | 1024 | 40 | 192 | To be released |
| openlrm-large | Objaverse + MVImgNet | 16 | 1024 | 80 | 384 | To be released |
## Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used in circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
|
ProjectsbyGaurav/donut-base-gaurav-receipt-epoch-5
|
ProjectsbyGaurav
| 2024-01-09T07:28:52Z | 36 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:podbilabs/wildreceipt-donut",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-01-09T05:50:46Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- podbilabs/wildreceipt-donut
model-index:
- name: donut-base-gaurav-receipt-epoch-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-gaurav-receipt-epoch-5
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
NaxGyumi/q-FrozenLake-v1-4x4-noSlippery
|
NaxGyumi
| 2024-01-09T07:25:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T07:24:55Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: Gymnasium, as used in the Deep RL course notebooks
# `load_from_hub` is the helper function defined in the course notebook
model = load_from_hub(repo_id="NaxGyumi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
s3nh/tenyx-TenyxChat-7B-v1-GGUF
|
s3nh
| 2024-01-09T07:11:46Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T07:11:44Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/tenyx/TenyxChat-7B-v1).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
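Until this section is filled in, here is a minimal local-inference sketch with `llama-cpp-python`, assuming one of the quantized GGUF files from this repository has been downloaded (the filename below is a placeholder, and the prompt format is only an illustration — see the original model card for the intended chat template):
```python
from llama_cpp import Llama

# Path to a downloaded GGUF file from this repo (placeholder filename)
llm = Llama(model_path="./tenyx-TenyxChat-7B-v1.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "### Instruction: Write a haiku about winter.\n### Response:",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```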
# Original model card
|
acedev003/llama-2-coder-7b
|
acedev003
| 2024-01-09T07:11:04Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"code",
"coding",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T07:04:33Z |
---
tags:
- generated_from_trainer
- code
- coding
- llama
model-index:
- name: Llama-2-coder-7b
results: []
license: apache-2.0
language:
- code
thumbnail: https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/llama-2-coder-7b/resolve/main/llama2-coder-logo-removebg-preview.png" alt="llama-2 coder logo"">
</div>
# LlaMa 2 Coder 🦙👩💻
**LlaMa-2 7b** fine-tuned on the **CodeAlpaca 20k instructions dataset** by using the method **QLoRA** with [PEFT](https://github.com/huggingface/peft) library.
## Model description 🧠
[Llama-2](https://huggingface.co/meta-llama/Llama-2-7b)
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
## Training and evaluation data 📚
[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following data used for fine-tuning the Code Alpaca model.
### Training hyperparameters ⚙
```py
optim="paged_adamw_32bit",
num_train_epochs = 2,
eval_steps=50,
save_steps=50,
evaluation_strategy="steps",
save_strategy="steps",
save_total_limit=2,
seed=66,
load_best_model_at_end=True,
logging_steps=1,
learning_rate=2e-4,
fp16=True,
bf16=False,
max_grad_norm=0.3,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="constant"
```
### Training results 🗒️
| Step | Training Loss | Validation Loss |
|------|----------|----------|
| 50 | 0.624400 | 0.600070 |
| 100 | 0.634100 | 0.592757 |
| 150 | 0.545800 | 0.586652 |
| 200 | 0.572500 | 0.577525 |
| 250 | 0.528000 | 0.590118 |
### Eval results 📊
WIP
### Example of usage 👩💻
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_id = "mrm8488/llama-2-coder-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")
def create_prompt(instruction):
system = "You are a coding assistant that will help the user to resolve the following instruction:"
instruction = "### Instruction: " + instruction
return system + "\n" + instruction + "\n\n" + "### Solution:" + "\n"
def generate(
instruction,
max_new_tokens=128,
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=4,
**kwargs,
):
prompt = create_prompt(instruction)
print(prompt)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to("cuda")
attention_mask = inputs["attention_mask"].to("cuda")
generation_config = GenerationConfig(
temperature=temperature,
top_p=top_p,
top_k=top_k,
num_beams=num_beams,
**kwargs,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=max_new_tokens,
early_stopping=True
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
return output.split("### Solution:")[1].lstrip("\n")
instruction = """
Edit the following XML code to add a navigation bar to the top of a web page
<html>
<head>
<title>CliBrAIn</title>
</head>
"""
print(generate(instruction))
```
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { llama-2-coder-7b (Revision d30d193) },
year = 2023,
url = { https://huggingface.co/mrm8488/llama-2-coder-7b },
doi = { 10.57967/hf/0931 },
publisher = { Hugging Face }
}
```
|
Zienab/wav
|
Zienab
| 2024-01-09T07:06:37Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-08T11:49:55Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
model-index:
- name: wav
results: []
language:
- ar
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
DavideTHU/SDXL_LoRA_macbook2
|
DavideTHU
| 2024-01-09T06:58:43Z | 7 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-09T06:22:50Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'photo of a <s0><s1> laptop'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a <s0><s1> laptop
license: openrail++
---
# SDXL LoRA DreamBooth - DavideTHU/SDXL_LoRA_macbook2
<Gallery />
## Model description
### These are DavideTHU/SDXL_LoRA_macbook2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`SDXL_LoRA_macbook2.safetensors` here 💾](/DavideTHU/SDXL_LoRA_macbook2/blob/main/SDXL_LoRA_macbook2.safetensors)**.
- Place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:SDXL_LoRA_macbook2:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`SDXL_LoRA_macbook2_emb.safetensors` here 💾](/DavideTHU/SDXL_LoRA_macbook2/blob/main/SDXL_LoRA_macbook2_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `SDXL_LoRA_macbook2_emb` to your prompt. For example, `photo of a SDXL_LoRA_macbook2_emb laptop`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DavideTHU/SDXL_LoRA_macbook2', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='DavideTHU/SDXL_LoRA_macbook2', filename='SDXL_LoRA_macbook2_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('photo of a <s0><s1> laptop').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/DavideTHU/SDXL_LoRA_macbook2/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
pranitamahajan/falcon7binstruct
|
pranitamahajan
| 2024-01-09T06:57:22Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T06:29:51Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: tiiuae/falcon-7b
model-index:
- name: falcon7binstruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7binstruct
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 10
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
kar-saaragh/a2c-PandaReachDense-v3
|
kar-saaragh
| 2024-01-09T06:39:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T06:35:02Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/arabart-zaebuc-gec-ged-13
|
CAMeL-Lab
| 2024-01-09T06:35:07Z | 150 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"ar",
"arxiv:2305.14734",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-09T12:34:53Z |
---
license: mit
language:
- ar
---
# AraBART+Morph+GEC<sup>13</sup> ZAEBUC Model
## Model description
**AraBART+Morph+GEC<sup>13</sup>** is a Modern Standard Arabic (MSA) grammatical error correction (GEC) model that was built by fine-tuning the [AraBART](https://huggingface.co/moussaKam/AraBART) model.
For the fine-tuning, we used the [QALB-2014](https://aclanthology.org/W14-3605.pdf), [QALB-2015](https://aclanthology.org/W15-3204.pdf), and [ZAEBUC](https://aclanthology.org/2022.lrec-1.9.pdf) datasets. Please note that this model was fine-tuned on morphologically preprocessed text.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[Advancements in Arabic Grammatical Error Detection and Correction:
An Empirical Investigation](https://arxiv.org/abs/2305.14734)."* Our fine-tuning code and data can be found [here](https://github.com/CAMeL-Lab/arabic-gec).
## Intended uses
You can use the AraBART+Morph+GEC<sup>13</sup> model as part of an extended version of the [transformers](https://github.com/CAMeL-Lab/arabic-gec) library that we make publicly available.
The GEC model is intended to be used with this [GED](https://huggingface.co/CAMeL-Lab/camelbert-msa-zaebuc-ged-13) model as we outlined in the example below.
We used this GEC model to report results on the ZAEBUC dev and test sets in our [paper](https://arxiv.org/abs/2305.14734).
#### How to use
To use the model with our extended version of transformers:
```python
from transformers import AutoTokenizer, BertForTokenClassification, MBartForConditionalGeneration
from camel_tools.disambig.bert import BERTUnfactoredDisambiguator
from camel_tools.utils.dediac import dediac_ar
import torch.nn.functional as F
import torch
bert_disambig = BERTUnfactoredDisambiguator.pretrained()
ged_tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/camelbert-msa-zaebuc-ged-13')
ged_model = BertForTokenClassification.from_pretrained('CAMeL-Lab/camelbert-msa-zaebuc-ged-13')
gec_tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/arabart-zaebuc-gec-ged-13')
gec_model = MBartForConditionalGeneration.from_pretrained('CAMeL-Lab/arabart-zaebuc-gec-ged-13')
text = 'و قال له انه يحب اكل الطعام بكثره .'
# morph processing the input text
text_disambig = bert_disambig.disambiguate(text.split())
morph_pp_text = [dediac_ar(w_disambig.analyses[0].analysis['diac']) for w_disambig in text_disambig]
morph_pp_text = ' '.join(morph_pp_text)
# GED tagging
inputs = ged_tokenizer([morph_pp_text], return_tensors='pt')
logits = ged_model(**inputs).logits
preds = F.softmax(logits, dim=-1).squeeze()[1:-1]
pred_ged_labels = [ged_model.config.id2label[p.item()] for p in torch.argmax(preds, -1)]
# Extending GED label to GEC-tokenized input
ged_label2ids = gec_model.config.ged_label2id
tokens, ged_labels = [], []
for word, label in zip(morph_pp_text.split(), pred_ged_labels):
word_tokens = gec_tokenizer.tokenize(word)
if len(word_tokens) > 0:
tokens.extend(word_tokens)
ged_labels.extend([label for _ in range(len(word_tokens))])
input_ids = gec_tokenizer.convert_tokens_to_ids(tokens)
input_ids = [gec_tokenizer.bos_token_id] + input_ids + [gec_tokenizer.eos_token_id]
label_ids = [ged_label2ids.get(label, ged_label2ids['<pad>']) for label in ged_labels]
label_ids = [ged_label2ids['UC']] + label_ids + [ged_label2ids['UC']]
attention_mask = [1 for _ in range(len(input_ids))]
gen_kwargs = {'num_beams': 5, 'max_length': 100,
'num_return_sequences': 1,
'no_repeat_ngram_size': 0, 'early_stopping': False,
'ged_tags': torch.tensor([label_ids]),
'attention_mask': torch.tensor([attention_mask])
}
# GEC generation
generated = gec_model.generate(torch.tensor([input_ids]), **gen_kwargs)
generated_text = gec_tokenizer.batch_decode(generated,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)[0]
print(generated_text) # وقال له أنه يحب أكل الطعام بكثرة .
```
## Citation
```bibtex
@inproceedings{alhafni-etal-2023-advancements,
title = "Advancements in {A}rabic Grammatical Error Detection and Correction: An Empirical Investigation",
author = "Alhafni, Bashar and
Inoue, Go and
Khairallah, Christian and
Habash, Nizar",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.396",
doi = "10.18653/v1/2023.emnlp-main.396",
pages = "6430--6448",
abstract = "Grammatical error correction (GEC) is a well-explored problem in English with many existing models and datasets. However, research on GEC in morphologically rich languages has been limited due to challenges such as data scarcity and language complexity. In this paper, we present the first results on Arabic GEC using two newly developed Transformer-based pretrained sequence-to-sequence models. We also define the task of multi-class Arabic grammatical error detection (GED) and present the first results on multi-class Arabic GED. We show that using GED information as auxiliary input in GEC models improves GEC performance across three datasets spanning different genres. Moreover, we also investigate the use of contextual morphological preprocessing in aiding GEC systems. Our models achieve SOTA results on two Arabic GEC shared task datasets and establish a strong benchmark on a recently created dataset. We make our code, data, and pretrained models publicly available.",
}
```
|
akashvshroff/mistral-7b-midjourney
|
akashvshroff
| 2024-01-09T06:14:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-09T05:10:10Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
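In the absence of author-provided instructions, a minimal sketch for loading the LoRA adapter on top of the base model listed above (untested; assumes the adapter weights in this repository follow the standard PEFT layout, and the prompt format is only an illustration):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # requires `accelerate`

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base, "akashvshroff/mistral-7b-midjourney")

prompt = "[INST] Write a Midjourney-style prompt for a futuristic city at dusk. [/INST]"  # assumed chat format
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```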
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
AlephNull/Reinforce-CartPole-v1
|
AlephNull
| 2024-01-09T06:13:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T06:01:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
amazingvince/chess-llama-smol-1024
|
amazingvince
| 2024-01-09T06:10:11Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T07:00:26Z |
---
base_model: chess-llama/config.json
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mini-1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-1024
This model is a fine-tuned version of [chess-llama/config.json](https://huggingface.co/chess-llama/config.json) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4399
- Accuracy: 0.7228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 8326
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.1877 | 0.0 | 200 | 2.1894 | 0.5763 |
| 1.612 | 0.01 | 400 | 1.5928 | 0.6109 |
| 1.2645 | 0.01 | 600 | 1.2604 | 0.6379 |
| 1.0711 | 0.01 | 800 | 1.0720 | 0.6547 |
| 0.953 | 0.02 | 1000 | 0.9513 | 0.6628 |
| 0.9024 | 0.02 | 1200 | 0.8965 | 0.6678 |
| 0.8682 | 0.02 | 1400 | 0.8618 | 0.6712 |
| 0.8366 | 0.03 | 1600 | 0.8343 | 0.6741 |
| 0.8127 | 0.03 | 1800 | 0.8146 | 0.6763 |
| 0.7916 | 0.03 | 2000 | 0.7940 | 0.6784 |
| 0.781 | 0.04 | 2200 | 0.7815 | 0.6799 |
| 0.7647 | 0.04 | 2400 | 0.7692 | 0.6813 |
| 0.7617 | 0.04 | 2600 | 0.7589 | 0.6825 |
| 0.7523 | 0.05 | 2800 | 0.7466 | 0.6842 |
| 0.7394 | 0.05 | 3000 | 0.7373 | 0.6852 |
| 0.7297 | 0.05 | 3200 | 0.7279 | 0.6864 |
| 0.712 | 0.06 | 3400 | 0.7206 | 0.6871 |
| 0.716 | 0.06 | 3600 | 0.7120 | 0.6884 |
| 0.6994 | 0.06 | 3800 | 0.7044 | 0.6893 |
| 0.6885 | 0.07 | 4000 | 0.6969 | 0.6903 |
| 0.6887 | 0.07 | 4200 | 0.6916 | 0.6908 |
| 0.6812 | 0.07 | 4400 | 0.6840 | 0.6917 |
| 0.6799 | 0.08 | 4600 | 0.6774 | 0.6927 |
| 0.6672 | 0.08 | 4800 | 0.6718 | 0.6935 |
| 0.6658 | 0.08 | 5000 | 0.6653 | 0.6939 |
| 0.6455 | 0.09 | 5200 | 0.6609 | 0.6948 |
| 0.661 | 0.09 | 5400 | 0.6569 | 0.6953 |
| 0.648 | 0.09 | 5600 | 0.6505 | 0.6960 |
| 0.6453 | 0.1 | 5800 | 0.6458 | 0.6967 |
| 0.6374 | 0.1 | 6000 | 0.6407 | 0.6973 |
| 0.6351 | 0.1 | 6200 | 0.6363 | 0.6977 |
| 0.6273 | 0.11 | 6400 | 0.6328 | 0.6983 |
| 0.6234 | 0.11 | 6600 | 0.6292 | 0.6987 |
| 0.6204 | 0.12 | 6800 | 0.6247 | 0.6992 |
| 0.6179 | 0.12 | 7000 | 0.6217 | 0.6994 |
| 0.6122 | 0.12 | 7200 | 0.6169 | 0.7001 |
| 0.6096 | 0.13 | 7400 | 0.6132 | 0.7006 |
| 0.6046 | 0.13 | 7600 | 0.6101 | 0.7011 |
| 0.5997 | 0.13 | 7800 | 0.6072 | 0.7016 |
| 0.5988 | 0.14 | 8000 | 0.6047 | 0.7015 |
| 0.5995 | 0.14 | 8200 | 0.6011 | 0.7022 |
| 0.6017 | 0.14 | 8400 | 0.5985 | 0.7024 |
| 0.5962 | 0.15 | 8600 | 0.5944 | 0.7028 |
| 0.5857 | 0.15 | 8800 | 0.5919 | 0.7034 |
| 0.5829 | 0.15 | 9000 | 0.5903 | 0.7034 |
| 0.5862 | 0.16 | 9200 | 0.5856 | 0.7040 |
| 0.5786 | 0.16 | 9400 | 0.5834 | 0.7044 |
| 0.5785 | 0.16 | 9600 | 0.5813 | 0.7044 |
| 0.5819 | 0.17 | 9800 | 0.5788 | 0.7049 |
| 0.5804 | 0.17 | 10000 | 0.5768 | 0.7051 |
| 0.5755 | 0.17 | 10200 | 0.5748 | 0.7053 |
| 0.57 | 0.18 | 10400 | 0.5728 | 0.7057 |
| 0.567 | 0.18 | 10600 | 0.5699 | 0.7059 |
| 0.5629 | 0.18 | 10800 | 0.5672 | 0.7063 |
| 0.5615 | 0.19 | 11000 | 0.5648 | 0.7066 |
| 0.5628 | 0.19 | 11200 | 0.5633 | 0.7067 |
| 0.5628 | 0.19 | 11400 | 0.5610 | 0.7070 |
| 0.5551 | 0.2 | 11600 | 0.5588 | 0.7075 |
| 0.5572 | 0.2 | 11800 | 0.5558 | 0.7078 |
| 0.5543 | 0.2 | 12000 | 0.5559 | 0.7076 |
| 0.5512 | 0.21 | 12200 | 0.5536 | 0.7080 |
| 0.5491 | 0.21 | 12400 | 0.5517 | 0.7081 |
| 0.5455 | 0.21 | 12600 | 0.5494 | 0.7085 |
| 0.5494 | 0.22 | 12800 | 0.5480 | 0.7085 |
| 0.5438 | 0.22 | 13000 | 0.5461 | 0.7087 |
| 0.5492 | 0.22 | 13200 | 0.5449 | 0.7090 |
| 0.5385 | 0.23 | 13400 | 0.5432 | 0.7092 |
| 0.5399 | 0.23 | 13600 | 0.5411 | 0.7094 |
| 0.5416 | 0.23 | 13800 | 0.5406 | 0.7095 |
| 0.5316 | 0.24 | 14000 | 0.5379 | 0.7099 |
| 0.5305 | 0.24 | 14200 | 0.5367 | 0.7102 |
| 0.5349 | 0.24 | 14400 | 0.5337 | 0.7106 |
| 0.5313 | 0.25 | 14600 | 0.5329 | 0.7104 |
| 0.5336 | 0.25 | 14800 | 0.5324 | 0.7107 |
| 0.529 | 0.25 | 15000 | 0.5306 | 0.7107 |
| 0.5283 | 0.26 | 15200 | 0.5291 | 0.7109 |
| 0.5241 | 0.26 | 15400 | 0.5277 | 0.7111 |
| 0.5298 | 0.26 | 15600 | 0.5265 | 0.7113 |
| 0.5199 | 0.27 | 15800 | 0.5255 | 0.7113 |
| 0.5303 | 0.27 | 16000 | 0.5237 | 0.7116 |
| 0.5184 | 0.27 | 16200 | 0.5228 | 0.7118 |
| 0.5171 | 0.28 | 16400 | 0.5206 | 0.7122 |
| 0.525 | 0.28 | 16600 | 0.5205 | 0.7122 |
| 0.5191 | 0.28 | 16800 | 0.5191 | 0.7123 |
| 0.5161 | 0.29 | 17000 | 0.5182 | 0.7124 |
| 0.5205 | 0.29 | 17200 | 0.5160 | 0.7126 |
| 0.5157 | 0.29 | 17400 | 0.5156 | 0.7128 |
| 0.5071 | 0.3 | 17600 | 0.5140 | 0.7129 |
| 0.5151 | 0.3 | 17800 | 0.5129 | 0.7130 |
| 0.5127 | 0.3 | 18000 | 0.5124 | 0.7130 |
| 0.5098 | 0.31 | 18200 | 0.5112 | 0.7133 |
| 0.5099 | 0.31 | 18400 | 0.5104 | 0.7134 |
| 0.5056 | 0.31 | 18600 | 0.5084 | 0.7135 |
| 0.5093 | 0.32 | 18800 | 0.5078 | 0.7138 |
| 0.5033 | 0.32 | 19000 | 0.5069 | 0.7139 |
| 0.5013 | 0.32 | 19200 | 0.5063 | 0.7139 |
| 0.5087 | 0.33 | 19400 | 0.5049 | 0.7140 |
| 0.5041 | 0.33 | 19600 | 0.5037 | 0.7144 |
| 0.4994 | 0.34 | 19800 | 0.5035 | 0.7144 |
| 0.5025 | 0.34 | 20000 | 0.5027 | 0.7144 |
| 0.5005 | 0.34 | 20200 | 0.5020 | 0.7144 |
| 0.4972 | 0.35 | 20400 | 0.5012 | 0.7147 |
| 0.5047 | 0.35 | 20600 | 0.5005 | 0.7145 |
| 0.4986 | 0.35 | 20800 | 0.4995 | 0.7148 |
| 0.497 | 0.36 | 21000 | 0.4982 | 0.7150 |
| 0.4986 | 0.36 | 21200 | 0.4971 | 0.7151 |
| 0.4918 | 0.36 | 21400 | 0.4967 | 0.7152 |
| 0.5001 | 0.37 | 21600 | 0.4961 | 0.7153 |
| 0.4934 | 0.37 | 21800 | 0.4952 | 0.7154 |
| 0.4948 | 0.37 | 22000 | 0.4947 | 0.7155 |
| 0.4878 | 0.38 | 22200 | 0.4930 | 0.7157 |
| 0.4913 | 0.38 | 22400 | 0.4926 | 0.7157 |
| 0.487 | 0.38 | 22600 | 0.4921 | 0.7158 |
| 0.4919 | 0.39 | 22800 | 0.4913 | 0.7158 |
| 0.4904 | 0.39 | 23000 | 0.4907 | 0.7160 |
| 0.4863 | 0.39 | 23200 | 0.4903 | 0.7161 |
| 0.4858 | 0.4 | 23400 | 0.4896 | 0.7160 |
| 0.487 | 0.4 | 23600 | 0.4891 | 0.7163 |
| 0.4869 | 0.4 | 23800 | 0.4879 | 0.7163 |
| 0.4851 | 0.41 | 24000 | 0.4869 | 0.7164 |
| 0.4903 | 0.41 | 24200 | 0.4867 | 0.7164 |
| 0.4845 | 0.41 | 24400 | 0.4856 | 0.7166 |
| 0.4797 | 0.42 | 24600 | 0.4852 | 0.7168 |
| 0.4799 | 0.42 | 24800 | 0.4850 | 0.7168 |
| 0.4751 | 0.42 | 25000 | 0.4843 | 0.7168 |
| 0.4745 | 0.43 | 25200 | 0.4836 | 0.7170 |
| 0.4811 | 0.43 | 25400 | 0.4827 | 0.7170 |
| 0.4805 | 0.43 | 25600 | 0.4828 | 0.7171 |
| 0.483 | 0.44 | 25800 | 0.4821 | 0.7171 |
| 0.4825 | 0.44 | 26000 | 0.4815 | 0.7172 |
| 0.4749 | 0.44 | 26200 | 0.4803 | 0.7173 |
| 0.477 | 0.45 | 26400 | 0.4796 | 0.7174 |
| 0.476 | 0.45 | 26600 | 0.4792 | 0.7176 |
| 0.4776 | 0.45 | 26800 | 0.4790 | 0.7175 |
| 0.4811 | 0.46 | 27000 | 0.4780 | 0.7177 |
| 0.4761 | 0.46 | 27200 | 0.4776 | 0.7177 |
| 0.4727 | 0.46 | 27400 | 0.4771 | 0.7178 |
| 0.4737 | 0.47 | 27600 | 0.4761 | 0.7179 |
| 0.4722 | 0.47 | 27800 | 0.4760 | 0.7180 |
| 0.4713 | 0.47 | 28000 | 0.4753 | 0.7182 |
| 0.4711 | 0.48 | 28200 | 0.4747 | 0.7182 |
| 0.4738 | 0.48 | 28400 | 0.4750 | 0.7182 |
| 0.4764 | 0.48 | 28600 | 0.4738 | 0.7183 |
| 0.4725 | 0.49 | 28800 | 0.4727 | 0.7184 |
| 0.468 | 0.49 | 29000 | 0.4728 | 0.7184 |
| 0.4727 | 0.49 | 29200 | 0.4719 | 0.7185 |
| 0.4662 | 0.5 | 29400 | 0.4721 | 0.7186 |
| 0.4655 | 0.5 | 29600 | 0.4711 | 0.7186 |
| 0.4759 | 0.5 | 29800 | 0.4709 | 0.7187 |
| 0.4647 | 0.51 | 30000 | 0.4706 | 0.7187 |
| 0.4738 | 0.51 | 30200 | 0.4703 | 0.7187 |
| 0.4751 | 0.51 | 30400 | 0.4694 | 0.7190 |
| 0.4677 | 0.52 | 30600 | 0.4690 | 0.7189 |
| 0.4605 | 0.52 | 30800 | 0.4687 | 0.7189 |
| 0.466 | 0.52 | 31000 | 0.4686 | 0.7189 |
| 0.4626 | 0.53 | 31200 | 0.4678 | 0.7191 |
| 0.4616 | 0.53 | 31400 | 0.4672 | 0.7192 |
| 0.4698 | 0.53 | 31600 | 0.4665 | 0.7192 |
| 0.4599 | 0.54 | 31800 | 0.4664 | 0.7193 |
| 0.4618 | 0.54 | 32000 | 0.4660 | 0.7192 |
| 0.459 | 0.55 | 32200 | 0.4659 | 0.7192 |
| 0.4608 | 0.55 | 32400 | 0.4654 | 0.7194 |
| 0.4623 | 0.55 | 32600 | 0.4651 | 0.7194 |
| 0.4654 | 0.56 | 32800 | 0.4646 | 0.7196 |
| 0.4585 | 0.56 | 33000 | 0.4638 | 0.7195 |
| 0.4597 | 0.56 | 33200 | 0.4636 | 0.7196 |
| 0.4568 | 0.57 | 33400 | 0.4631 | 0.7198 |
| 0.4634 | 0.57 | 33600 | 0.4630 | 0.7197 |
| 0.4648 | 0.57 | 33800 | 0.4624 | 0.7197 |
| 0.4609 | 0.58 | 34000 | 0.4621 | 0.7198 |
| 0.4619 | 0.58 | 34200 | 0.4620 | 0.7197 |
| 0.4603 | 0.58 | 34400 | 0.4614 | 0.7199 |
| 0.4566 | 0.59 | 34600 | 0.4611 | 0.7199 |
| 0.4581 | 0.59 | 34800 | 0.4604 | 0.7201 |
| 0.4583 | 0.59 | 35000 | 0.4599 | 0.7202 |
| 0.4607 | 0.6 | 35200 | 0.4597 | 0.7202 |
| 0.4533 | 0.6 | 35400 | 0.4592 | 0.7202 |
| 0.4619 | 0.6 | 35600 | 0.4591 | 0.7202 |
| 0.46 | 0.61 | 35800 | 0.4585 | 0.7203 |
| 0.4516 | 0.61 | 36000 | 0.4582 | 0.7203 |
| 0.457 | 0.61 | 36200 | 0.4582 | 0.7203 |
| 0.4544 | 0.62 | 36400 | 0.4576 | 0.7204 |
| 0.4515 | 0.62 | 36600 | 0.4569 | 0.7205 |
| 0.4573 | 0.62 | 36800 | 0.4568 | 0.7206 |
| 0.4517 | 0.63 | 37000 | 0.4565 | 0.7206 |
| 0.4529 | 0.63 | 37200 | 0.4559 | 0.7207 |
| 0.4562 | 0.63 | 37400 | 0.4560 | 0.7207 |
| 0.4586 | 0.64 | 37600 | 0.4556 | 0.7207 |
| 0.4561 | 0.64 | 37800 | 0.4549 | 0.7208 |
| 0.4566 | 0.64 | 38000 | 0.4548 | 0.7207 |
| 0.4487 | 0.65 | 38200 | 0.4545 | 0.7207 |
| 0.452 | 0.65 | 38400 | 0.4542 | 0.7209 |
| 0.4529 | 0.65 | 38600 | 0.4540 | 0.7209 |
| 0.4473 | 0.66 | 38800 | 0.4542 | 0.7209 |
| 0.4479 | 0.66 | 39000 | 0.4533 | 0.7210 |
| 0.4494 | 0.66 | 39200 | 0.4530 | 0.7211 |
| 0.4502 | 0.67 | 39400 | 0.4527 | 0.7211 |
| 0.4496 | 0.67 | 39600 | 0.4523 | 0.7211 |
| 0.4492 | 0.67 | 39800 | 0.4520 | 0.7213 |
| 0.4491 | 0.68 | 40000 | 0.4518 | 0.7211 |
| 0.4499 | 0.68 | 40200 | 0.4514 | 0.7212 |
| 0.4477 | 0.68 | 40400 | 0.4514 | 0.7213 |
| 0.4448 | 0.69 | 40600 | 0.4511 | 0.7213 |
| 0.4526 | 0.69 | 40800 | 0.4506 | 0.7214 |
| 0.4425 | 0.69 | 41000 | 0.4504 | 0.7214 |
| 0.4506 | 0.7 | 41200 | 0.4501 | 0.7214 |
| 0.4492 | 0.7 | 41400 | 0.4498 | 0.7216 |
| 0.4481 | 0.7 | 41600 | 0.4495 | 0.7215 |
| 0.451 | 0.71 | 41800 | 0.4494 | 0.7216 |
| 0.4479 | 0.71 | 42000 | 0.4493 | 0.7215 |
| 0.4546 | 0.71 | 42200 | 0.4489 | 0.7216 |
| 0.4439 | 0.72 | 42400 | 0.4489 | 0.7217 |
| 0.4454 | 0.72 | 42600 | 0.4487 | 0.7217 |
| 0.4508 | 0.72 | 42800 | 0.4484 | 0.7217 |
| 0.448 | 0.73 | 43000 | 0.4483 | 0.7217 |
| 0.447 | 0.73 | 43200 | 0.4479 | 0.7217 |
| 0.4508 | 0.73 | 43400 | 0.4477 | 0.7217 |
| 0.4397 | 0.74 | 43600 | 0.4473 | 0.7218 |
| 0.4453 | 0.74 | 43800 | 0.4473 | 0.7219 |
| 0.4479 | 0.74 | 44000 | 0.4469 | 0.7219 |
| 0.4421 | 0.75 | 44200 | 0.4466 | 0.7220 |
| 0.4479 | 0.75 | 44400 | 0.4464 | 0.7220 |
| 0.4492 | 0.75 | 44600 | 0.4463 | 0.7220 |
| 0.4466 | 0.76 | 44800 | 0.4460 | 0.7221 |
| 0.4543 | 0.76 | 45000 | 0.4458 | 0.7221 |
| 0.4452 | 0.77 | 45200 | 0.4456 | 0.7221 |
| 0.4456 | 0.77 | 45400 | 0.4454 | 0.7221 |
| 0.4455 | 0.77 | 45600 | 0.4452 | 0.7221 |
| 0.4405 | 0.78 | 45800 | 0.4451 | 0.7221 |
| 0.4449 | 0.78 | 46000 | 0.4448 | 0.7223 |
| 0.4433 | 0.78 | 46200 | 0.4447 | 0.7223 |
| 0.445 | 0.79 | 46400 | 0.4447 | 0.7223 |
| 0.447 | 0.79 | 46600 | 0.4444 | 0.7223 |
| 0.4405 | 0.79 | 46800 | 0.4444 | 0.7222 |
| 0.4434 | 0.8 | 47000 | 0.4443 | 0.7222 |
| 0.4385 | 0.8 | 47200 | 0.4440 | 0.7223 |
| 0.442 | 0.8 | 47400 | 0.4439 | 0.7223 |
| 0.4402 | 0.81 | 47600 | 0.4437 | 0.7224 |
| 0.4368 | 0.81 | 47800 | 0.4437 | 0.7224 |
| 0.4392 | 0.81 | 48000 | 0.4435 | 0.7223 |
| 0.439 | 0.82 | 48200 | 0.4434 | 0.7225 |
| 0.4407 | 0.82 | 48400 | 0.4431 | 0.7225 |
| 0.4484 | 0.82 | 48600 | 0.4430 | 0.7225 |
| 0.4419 | 0.83 | 48800 | 0.4430 | 0.7224 |
| 0.4453 | 0.83 | 49000 | 0.4426 | 0.7225 |
| 0.4415 | 0.83 | 49200 | 0.4425 | 0.7225 |
| 0.4424 | 0.84 | 49400 | 0.4425 | 0.7225 |
| 0.4389 | 0.84 | 49600 | 0.4423 | 0.7226 |
| 0.4377 | 0.84 | 49800 | 0.4421 | 0.7226 |
| 0.4388 | 0.85 | 50000 | 0.4420 | 0.7227 |
| 0.4409 | 0.85 | 50200 | 0.4419 | 0.7225 |
| 0.442 | 0.85 | 50400 | 0.4417 | 0.7227 |
| 0.4371 | 0.86 | 50600 | 0.4417 | 0.7227 |
| 0.4384 | 0.86 | 50800 | 0.4415 | 0.7226 |
| 0.4402 | 0.86 | 51000 | 0.4415 | 0.7227 |
| 0.4375 | 0.87 | 51200 | 0.4414 | 0.7227 |
| 0.4367 | 0.87 | 51400 | 0.4413 | 0.7227 |
| 0.4447 | 0.87 | 51600 | 0.4412 | 0.7227 |
| 0.4434 | 0.88 | 51800 | 0.4411 | 0.7227 |
| 0.4357 | 0.88 | 52000 | 0.4411 | 0.7228 |
| 0.4404 | 0.88 | 52200 | 0.4410 | 0.7228 |
| 0.4369 | 0.89 | 52400 | 0.4409 | 0.7228 |
| 0.4348 | 0.89 | 52600 | 0.4409 | 0.7228 |
| 0.4394 | 0.89 | 52800 | 0.4408 | 0.7227 |
| 0.437 | 0.9 | 53000 | 0.4407 | 0.7227 |
| 0.438 | 0.9 | 53200 | 0.4407 | 0.7228 |
| 0.4421 | 0.9 | 53400 | 0.4406 | 0.7228 |
| 0.4421 | 0.91 | 53600 | 0.4405 | 0.7228 |
| 0.4361 | 0.91 | 53800 | 0.4405 | 0.7228 |
| 0.4367 | 0.91 | 54000 | 0.4404 | 0.7228 |
| 0.4371 | 0.92 | 54200 | 0.4403 | 0.7228 |
| 0.4349 | 0.92 | 54400 | 0.4403 | 0.7229 |
| 0.4432 | 0.92 | 54600 | 0.4403 | 0.7229 |
| 0.4355 | 0.93 | 54800 | 0.4402 | 0.7228 |
| 0.4402 | 0.93 | 55000 | 0.4402 | 0.7229 |
| 0.4403 | 0.93 | 55200 | 0.4401 | 0.7229 |
| 0.4445 | 0.94 | 55400 | 0.4401 | 0.7229 |
| 0.4336 | 0.94 | 55600 | 0.4401 | 0.7228 |
| 0.431 | 0.94 | 55800 | 0.4401 | 0.7229 |
| 0.4343 | 0.95 | 56000 | 0.4400 | 0.7229 |
| 0.4298 | 0.95 | 56200 | 0.4400 | 0.7229 |
| 0.43 | 0.95 | 56400 | 0.4400 | 0.7229 |
| 0.4446 | 0.96 | 56600 | 0.4400 | 0.7229 |
| 0.4417 | 0.96 | 56800 | 0.4400 | 0.7229 |
| 0.4431 | 0.96 | 57000 | 0.4400 | 0.7229 |
| 0.4353 | 0.97 | 57200 | 0.4399 | 0.7229 |
| 0.4351 | 0.97 | 57400 | 0.4399 | 0.7229 |
| 0.4398 | 0.97 | 57600 | 0.4399 | 0.7229 |
| 0.4368 | 0.98 | 57800 | 0.4399 | 0.7229 |
| 0.4379 | 0.98 | 58000 | 0.4399 | 0.7229 |
| 0.4353 | 0.99 | 58200 | 0.4399 | 0.7229 |
| 0.4397 | 0.99 | 58400 | 0.4399 | 0.7229 |
| 0.4401 | 0.99 | 58600 | 0.4399 | 0.7229 |
| 0.4366 | 1.0 | 58800 | 0.4399 | 0.7229 |
| 0.434 | 1.0 | 59000 | 0.4399 | 0.7228 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
h-e-l-l-o/email-spam-classification-merged
|
h-e-l-l-o
| 2024-01-09T05:53:08Z | 120 | 3 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:legacy107/spamming-email-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-15T08:45:48Z |
---
datasets:
- legacy107/spamming-email-classification
language:
- en
metrics:
- accuracy
library_name: transformers
---
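## Usage

A minimal inference sketch, assuming the checkpoint works with the standard `transformers` text-classification pipeline and that the label names come from the uploaded config:

```python
from transformers import pipeline

# Load the fine-tuned BERT spam classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="h-e-l-l-o/email-spam-classification-merged",
)

# Classify a sample email; label names are whatever the model config defines.
print(classifier("Congratulations! You have won a free cruise. Reply now to claim."))
```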
|
rajeshgautam/mistral7b-finetune-puffin-test
|
rajeshgautam
| 2024-01-09T05:48:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T05:21:25Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral7b-finetune-puffin-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7b-finetune-puffin-test
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
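## Usage

The adapter weights are not a standalone model; below is a minimal sketch for attaching them to the base checkpoint with `peft`. The prompt format is an assumption, since the card does not document the template used during fine-tuning.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "rajeshgautam/mistral7b-finetune-puffin-test"

# Load the base model, then attach the LoRA adapter trained above.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Hypothetical prompt format; adjust to the template actually used in training.
prompt = "### Human: Summarise the Puffin dataset in one sentence.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```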
|
Spanicin/Fulcrum_Achira
|
Spanicin
| 2024-01-09T05:48:14Z | 0 | 0 | null |
[
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-v0.1",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T05:48:13Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Fulcrum_Achira
Fulcrum_Achira is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-v0.1
- model: OpenPipe/mistral-ft-optimized-1218
parameters:
density: 0.5
weight: 0.5
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Spanicin/Fulcrum_Achira"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
douy/parrot-tulu-2-dpo-70B-lora-cp54
|
douy
| 2024-01-09T05:46:08Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:allenai/tulu-2-dpo-70b",
"base_model:adapter:allenai/tulu-2-dpo-70b",
"region:us"
] | null | 2024-01-09T05:19:19Z |
---
library_name: peft
base_model: allenai/tulu-2-dpo-70b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Crystalcareai/PhiAlpaca2
|
Crystalcareai
| 2024-01-09T05:28:16Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"phi-msft",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T05:21:18Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: phi-sft-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: tatsu-lab/alpaca
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./phi-sft-out
sequence_len: 2048
sample_packing: false # currently unsupported
pad_to_sequence_len:
adapter:
lora_model_dir:
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save:
- embd
- lm_head
wandb_project: Deepseek Wa
wandb_entity: lucasatkins81
wandb_watch:
wandb_name: Phi2 a6000 FT
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1.5
optimizer: paged_adamw_8bit
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
pad_token: "<|endoftext|>"
```
</details><br>
# phi-sft-out
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset (per the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4382 | 0.0 | 1 | nan |
| 0.9139 | 0.25 | 12351 | nan |
| 0.016 | 0.5 | 24702 | nan |
| 0.0538 | 0.75 | 37053 | nan |
| 0.6701 | 1.0 | 49404 | nan |
| 0.0018 | 1.25 | 61755 | nan |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
douy/parrot-mistral-7B-lora-cp36-segmentation
|
douy
| 2024-01-09T05:28:04Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-09T05:12:33Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
DavideTHU/corgy_laptop_LoRA
|
DavideTHU
| 2024-01-09T05:04:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-09T05:02:56Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK laptop
license: openrail++
---
# SDXL LoRA DreamBooth - DavideTHU/corgy_laptop_LoRA
<Gallery />
## Model description
These are DavideTHU/corgy_laptop_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK laptop` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](DavideTHU/corgy_laptop_LoRA/tree/main) them in the Files & versions tab.
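## Use it with the 🧨 diffusers library

A minimal loading sketch, assuming the adapter was exported under the default `pytorch_lora_weights.safetensors` filename:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load SDXL and attach the DreamBooth LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "DavideTHU/corgy_laptop_LoRA",
    weight_name="pytorch_lora_weights.safetensors",  # assumed default filename
)

# The trigger phrase from the card must appear in the prompt.
image = pipeline("a photo of TOK laptop on a wooden desk").images[0]
image.save("tok_laptop.png")
```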
|
DavideTHU/SDXL_LoRA_macbook
|
DavideTHU
| 2024-01-09T05:02:16Z | 12 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-09T04:26:32Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'photo of a <s0><s1> laptop'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a <s0><s1> laptop
license: openrail++
---
# SDXL LoRA DreamBooth - DavideTHU/SDXL_LoRA_macbook
<Gallery />
## Model description
### These are DavideTHU/SDXL_LoRA_macbook LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`SDXL_LoRA_macbook.safetensors` here 💾](/DavideTHU/SDXL_LoRA_macbook/blob/main/SDXL_LoRA_macbook.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:SDXL_LoRA_macbook:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`SDXL_LoRA_macbook_emb.safetensors` here 💾](/DavideTHU/SDXL_LoRA_macbook/blob/main/SDXL_LoRA_macbook_emb.safetensors)**.
    - Place it in your `embeddings` folder
- Use it by adding `SDXL_LoRA_macbook_emb` to your prompt. For example, `photo of a SDXL_LoRA_macbook_emb laptop`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DavideTHU/SDXL_LoRA_macbook', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='DavideTHU/SDXL_LoRA_macbook', filename='SDXL_LoRA_macbook_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('photo of a <s0><s1> laptop').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/DavideTHU/SDXL_LoRA_macbook/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
stablediffusionapi/golo
|
stablediffusionapi
| 2024-01-09T04:52:23Z | 27 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-09T04:50:36Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Golo API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "golo"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/golo)
Model link: [View model](https://modelslab.com/models/golo)
View all models: [View Models](https://modelslab.com/models)
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "golo",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN**
|
mutemoon/whisper-tiny-zh-food
|
mutemoon
| 2024-01-09T04:41:58Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"zh",
"dataset:mutemoon/audio-about-food-2k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-08T14:38:27Z |
---
license: apache-2.0
datasets:
- mutemoon/audio-about-food-2k
language:
- zh
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
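## Usage

A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` automatic-speech-recognition pipeline:

```python
from transformers import pipeline

# Chinese food-domain Whisper-tiny fine-tune; pass a path to an audio file
# (requires ffmpeg) or a raw waveform array.
asr = pipeline(
    "automatic-speech-recognition",
    model="mutemoon/whisper-tiny-zh-food",
)
print(asr("order_recording.wav"))  # hypothetical audio file path
```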
|
pnucamel/q-FrozenLake-v1-4x4-noSlippery
|
pnucamel
| 2024-01-09T04:35:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T04:35:47Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="pnucamel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jinmang2/kpfbert
|
jinmang2
| 2024-01-09T04:35:00Z | 15,266 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-31T06:40:37Z |
# KpfBERT
https://github.com/jinmang2/kpfbert
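## Usage

A minimal feature-extraction sketch, assuming the tokenizer files uploaded with the checkpoint follow the standard BERT layout:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# KpfBERT is a Korean BERT checkpoint; see the GitHub repository above for details.
tokenizer = AutoTokenizer.from_pretrained("jinmang2/kpfbert")
model = AutoModel.from_pretrained("jinmang2/kpfbert")

inputs = tokenizer("한국언론진흥재단에서 공개한 BERT 모델입니다.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```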
|
Abhra-loony/english-to-spanish-lang-translation-model
|
Abhra-loony
| 2024-01-09T04:23:40Z | 57 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-08T14:42:09Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: Abhra-loony/english-to-spanish-lang-translation-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Abhra-loony/english-to-spanish-lang-translation-model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1779
- Validation Loss: 1.7509
- Train Bleu: 10.0073
- Train Gen Len: 15.7591
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:-----:|
| 2.8605 | 2.2201 | 3.6612 | 16.2025 | 0 |
| 2.4128 | 1.9346 | 6.7036 | 15.9377 | 1 |
| 2.1779 | 1.7509 | 10.0073 | 15.7591 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
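## Usage

A minimal inference sketch; the `translate English to Spanish:` prefix mirrors the usual T5 fine-tuning setup and is an assumption, so adjust it to whatever prefix was used during training:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "Abhra-loony/english-to-spanish-lang-translation-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)  # TensorFlow weights

# Assumed task prefix; T5-style models are usually trained with one.
text = "translate English to Spanish: The weather is nice today."
inputs = tokenizer(text, return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```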
|
stablediffusionapi/mixreal
|
stablediffusionapi
| 2024-01-09T04:22:18Z | 27 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-09T04:20:34Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# MixReal API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "mixreal"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/mixreal)
Model link: [View model](https://modelslab.com/models/mixreal)
View all models: [View Models](https://modelslab.com/models)
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "mixreal",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN**
|
ThanhNX/falcon_7b-FT
|
ThanhNX
| 2024-01-09T04:19:42Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null | 2024-01-09T04:17:51Z |
---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
gianlab/swin-tiny-patch4-window7-224-finetuned-parkinson-classification
|
gianlab
| 2024-01-09T04:09:38Z | 243 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-08T14:22:23Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-parkinson-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9090909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-parkinson-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4966
- Accuracy: 0.9091
## Model description
This model was created by importing the dataset of spiral drawings made by both Parkinson's patients and healthy people into Google Colab from Kaggle: https://www.kaggle.com/datasets/kmader/parkinsons-drawings/data. I then used the image classification tutorial here: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb
obtaining the following notebook:
https://colab.research.google.com/drive/1oRjwgHjmaQYRU1qf-TTV7cg1qMZXgMaO?usp=sharing
The possible classified data are:
<ul>
<li>Healthy</li>
<li>Parkinson</li>
</ul>
### Spiral drawing example:

## Intended uses & limitations
Acknowledgements
The data came from the paper: Zham P, Kumar DK, Dabnichki P, Poosapadi Arjunan S and Raghav S (2017) Distinguishing Different Stages of Parkinson’s Disease Using Composite Index of Speed and Pen-Pressure of Sketching a Spiral. Front. Neurol. 8:435. doi: 10.3389/fneur.2017.00435
https://www.frontiersin.org/articles/10.3389/fneur.2017.00435/full
Data licence : https://creativecommons.org/licenses/by-nc-nd/4.0/
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6801 | 0.4545 |
| No log | 2.0 | 3 | 0.8005 | 0.3636 |
| No log | 3.0 | 5 | 0.6325 | 0.6364 |
| No log | 4.0 | 6 | 0.5494 | 0.8182 |
| No log | 5.0 | 7 | 0.5214 | 0.8182 |
| No log | 6.0 | 9 | 0.5735 | 0.7273 |
| 0.3063 | 7.0 | 11 | 0.4966 | 0.9091 |
| 0.3063 | 8.0 | 12 | 0.4557 | 0.9091 |
| 0.3063 | 9.0 | 13 | 0.4444 | 0.9091 |
| 0.3063 | 10.0 | 15 | 0.6226 | 0.6364 |
| 0.3063 | 11.0 | 17 | 0.8224 | 0.4545 |
| 0.3063 | 12.0 | 18 | 0.8127 | 0.4545 |
| 0.3063 | 13.0 | 19 | 0.7868 | 0.4545 |
| 0.2277 | 14.0 | 21 | 0.8195 | 0.4545 |
| 0.2277 | 15.0 | 23 | 0.7499 | 0.4545 |
| 0.2277 | 16.0 | 24 | 0.7022 | 0.5455 |
| 0.2277 | 17.0 | 25 | 0.6755 | 0.5455 |
| 0.2277 | 18.0 | 27 | 0.6277 | 0.6364 |
| 0.2277 | 19.0 | 29 | 0.5820 | 0.6364 |
| 0.1867 | 20.0 | 30 | 0.5784 | 0.6364 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
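## Usage

A minimal inference sketch (illustrative only, not a diagnostic tool), assuming the image processor saved with the checkpoint and the Healthy/Parkinson labels listed above:

```python
from transformers import pipeline

# Classify a spiral drawing as Healthy or Parkinson (illustrative only).
classifier = pipeline(
    "image-classification",
    model="gianlab/swin-tiny-patch4-window7-224-finetuned-parkinson-classification",
)
print(classifier("spiral_drawing.png"))  # hypothetical path to a spiral drawing image
```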
|
ucheokechukwu/q-Taxi-v3
|
ucheokechukwu
| 2024-01-09T04:08:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T04:08:16Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ucheokechukwu/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ucheokechukwu/q-FrozenLake-v1-4x4-noSlippery
|
ucheokechukwu
| 2024-01-09T04:06:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T04:06:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ucheokechukwu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kichan05/Kaguya-Ai-Test
|
kichan05
| 2024-01-09T03:59:53Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:42dot/42dot_LLM-SFT-1.3B",
"base_model:adapter:42dot/42dot_LLM-SFT-1.3B",
"region:us"
] | null | 2024-01-09T01:34:12Z |
---
library_name: peft
base_model: 42dot/42dot_LLM-SFT-1.3B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
shitshow123/tinylamma-20000
|
shitshow123
| 2024-01-09T03:58:26Z | 1,598 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T03:54:08Z |
---
license: apache-2.0
---
tinyllama1b-instruct trained for 20k DPO steps.
|
houe5k2/distilbert-base-uncased-finetuned-imdb
|
houe5k2
| 2024-01-09T03:41:19Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-07T06:58:57Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4436
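The auto-generated card does not include a usage snippet; below is a minimal sketch, assuming the checkpoint works with the standard `transformers` fill-mask pipeline (the example sentence is illustrative only).
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint as a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="houe5k2/distilbert-base-uncased-finetuned-imdb")

# [MASK] is the mask token used by uncased DistilBERT tokenizers.
print(unmasker("This movie was absolutely [MASK]."))
```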
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7294 | 1.0 | 157 | 2.5370 |
| 2.542 | 2.0 | 314 | 2.4485 |
| 2.3915 | 3.0 | 471 | 2.4344 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.2
|
DavideTHU/SDXL_LoRA_necklace3
|
DavideTHU
| 2024-01-09T03:36:07Z | 16 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-09T02:45:18Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'photo of a wearable necklace of style <s0><s1>'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a wearable necklace of style <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - DavideTHU/SDXL_LoRA_necklace3
<Gallery />
## Model description
### These are DavideTHU/SDXL_LoRA_necklace3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`SDXL_LoRA_necklace3.safetensors` here 💾](/DavideTHU/SDXL_LoRA_necklace3/blob/main/SDXL_LoRA_necklace3.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:SDXL_LoRA_necklace3:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`SDXL_LoRA_necklace3_emb.safetensors` here 💾](/DavideTHU/SDXL_LoRA_necklace3/blob/main/SDXL_LoRA_necklace3_emb.safetensors)**.
    - Place it in your `embeddings` folder
- Use it by adding `SDXL_LoRA_necklace3_emb` to your prompt. For example, `photo of a wearable necklace of style SDXL_LoRA_necklace3_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DavideTHU/SDXL_LoRA_necklace3', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='DavideTHU/SDXL_LoRA_necklace3', filename='SDXL_LoRA_necklace3_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('photo of a wearable necklace of style <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/DavideTHU/SDXL_LoRA_necklace3/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
helenblake13/first-baseline-1010-3060-2
|
helenblake13
| 2024-01-09T03:27:37Z | 2 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-09T03:23:22Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### first_baseline_1010_3060_2 Dreambooth model trained by helenblake13 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
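A minimal `diffusers` sketch (not part of the original card), assuming the repository loads as a standard `StableDiffusionPipeline`; the prompt below is a placeholder, not the documented instance prompt.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-trained checkpoint in half precision on GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "helenblake13/first-baseline-1010-3060-2", torch_dtype=torch.float16
).to("cuda")

# The exact instance prompt is not documented; replace with the trained concept token.
image = pipe("a photo of the trained concept").images[0]
image.save("sample.png")
```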
|
alfalmi/gpt2-poetry-esp
|
alfalmi
| 2024-01-09T03:12:45Z | 88 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"es",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T02:31:37Z |
---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: gpt2-poetry-esp
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-poetry-esp
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
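The card ships without a usage example; a minimal generation sketch, assuming the standard `transformers` text-generation pipeline applies (the Spanish seed line is illustrative only):
```python
from transformers import pipeline

# Load the Spanish poetry GPT-2 fine-tune.
generator = pipeline("text-generation", model="alfalmi/gpt2-poetry-esp")

# Sample a short continuation from a seed line.
print(generator("En la noche serena", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```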
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
jth1911/bert-finetuned-ner
|
jth1911
| 2024-01-09T03:12:40Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-09T03:01:01Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0579
- Precision: 0.9326
- Recall: 0.9502
- F1: 0.9413
- Accuracy: 0.9862
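A minimal inference sketch (not part of the auto-generated card), assuming the standard `transformers` token-classification pipeline; the example sentence is illustrative only.
```python
from transformers import pipeline

# Group sub-token predictions into whole entities with aggregation_strategy="simple".
ner = pipeline("token-classification", model="jth1911/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```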
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2303 | 1.0 | 878 | 0.0691 | 0.9050 | 0.9315 | 0.9181 | 0.9806 |
| 0.0479 | 2.0 | 1756 | 0.0624 | 0.9282 | 0.9460 | 0.9370 | 0.9849 |
| 0.0268 | 3.0 | 2634 | 0.0579 | 0.9326 | 0.9502 | 0.9413 | 0.9862 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Vivek1234321/llama2-qlora-finetunined-french
|
Vivek1234321
| 2024-01-09T03:09:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2024-01-09T03:08:49Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
TinyPixel/pythia-exp
|
TinyPixel
| 2024-01-09T02:59:37Z | 12 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"region:us"
] | null | 2023-11-15T05:36:05Z |
---
library_name: peft
base_model: EleutherAI/pythia-1b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
im99/lcps
|
im99
| 2024-01-09T02:42:31Z | 0 | 0 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T02:31:50Z |
---
license: apache-2.0
language:
- en
---
These are the official weights for *LiDAR-Camera Panoptic Segmentation via Geometry-Consistent and Semantic-Aware Alignment* (ICCV 2023).
|
shivanandmn/phi-2-ultrafeedback_binarized
|
shivanandmn
| 2024-01-09T02:33:17Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-08T20:04:47Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-ultrafeedback_binarized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-ultrafeedback_binarized
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
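The card does not show how to load the adapter; a hedged sketch, assuming the repository contains PEFT (LoRA) adapter weights on top of `microsoft/phi-2` and that a plain instruction prompt is acceptable:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "shivanandmn/phi-2-ultrafeedback_binarized")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# The prompt template follows the base phi-2 convention; the fine-tune may expect a different format.
inputs = tokenizer("Instruct: Write a short poem about the ocean.\nOutput:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```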
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
vpepe2003/q-Taxi-v3
|
vpepe2003
| 2024-01-09T02:29:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T02:29:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
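# Assumes `gym` is imported and that `load_from_hub` is the helper defined in the
# Hugging Face Deep RL course notebook (it is not a library import).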
model = load_from_hub(repo_id="vpepe2003/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vpepe2003/q-FrozenLake-v1-4x4-noSlippery
|
vpepe2003
| 2024-01-09T01:50:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T01:50:36Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
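# Assumes `gym` is imported and that `load_from_hub` is the helper defined in the
# Hugging Face Deep RL course notebook (it is not a library import).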
model = load_from_hub(repo_id="vpepe2003/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
freshpearYoon/medium2
|
freshpearYoon
| 2024-01-09T01:49:22Z | 57 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-09T00:43:36Z |
---
language:
- ko
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper_medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the aihub dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6505
- Cer: 12.0457
- Wer: 29.9853
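A minimal transcription sketch (not in the original card), assuming the checkpoint loads with the standard `transformers` ASR pipeline; the audio path is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned Whisper medium checkpoint.
asr = pipeline("automatic-speech-recognition", model="freshpearYoon/medium2")

# Replace with the path to a local (Korean) audio file.
print(asr("sample_korean.wav")["text"])
```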
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.6678 | 0.04 | 500 | 1.6505 | 12.0457 | 29.9853 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.15.0
- Tokenizers 0.15.0
|
urisoo/roberta-large-lora-token-classification
|
urisoo
| 2024-01-09T01:48:40Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:FacebookAI/roberta-large",
"base_model:adapter:FacebookAI/roberta-large",
"region:us"
] | null | 2024-01-09T01:48:36Z |
---
library_name: peft
base_model: roberta-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
uukuguy/speechless-mistral-moloras-7b
|
uukuguy
| 2024-01-09T01:43:21Z | 1,415 | 5 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"en",
"dataset:yahma/alpaca-cleaned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T09:25:26Z |
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- yahma/alpaca-cleaned
license: apache-2.0
---
<p><h1> speechless-mistral-moloras-7b </h1></p>
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-mistral-moloras-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-mistral-moloras-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-mistral-moloras-7B-GGUF)
[4-bit GGUF models for CPU+GPU inference](https://huggingface.co/uukuguy/speechless-mistral-moloras-7b/tree/main/GGUF)
This model is the static version of moloras (Mixture-of-multi-LoRAs) based on the following 6 Mistral-based LoRa modules.
- Intel/neural-chat-7b-v3-1
- migtissera/SynthIA-7B-v1.3
- jondurbin/airoboros-m-7b-3.1.2
- bhenrym14/mistral-7b-platypus-fp16
- teknium/CollectiveCognition-v1.1-Mistral-7B
- uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
In total, 6 LoRA modules are taken from [speechless-mistral-7b-dare-0.85](https://huggingface.co/speechlessai/speechless-mistral-7b-dare-0.85).
The mixture-of-multi-LoRAs router assembles the LoRA modules automatically, using a gradient-free approach to obtain the LoRA coefficients and requiring only a handful of inference steps for unseen tasks.
Code: https://github.com/uukuguy/multi_loras?tab=readme-ov-file#mixture-of-multi-loras
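A hedged usage sketch (not part of the original card), assuming standard `transformers` causal-LM loading; the plain-text prompt is illustrative, as the card does not specify a chat template.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-mistral-moloras-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Plain-text prompt; the merged LoRA modules may respond better to an instruction-style template.
inputs = tokenizer("Write a short haiku about the sea.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```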
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 59.98 |
| HellaSwag | 83.29 |
| MMLU | 64.12 |
| TruthfulQA | 42.15 |
| Winogrande | 78.37 |
| GSM8K | 37.68 |
| Average | 60.93 |
|
Buttsac/bible
|
Buttsac
| 2024-01-09T01:32:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-09T01:32:24Z |
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer


def load_model():
    model_name = "gpt2"  # You can experiment with other GPT-2 variants or models
    model = GPT2LMHeadModel.from_pretrained(model_name)
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    return model, tokenizer


def generate_response(prompt, model, tokenizer, max_length=100):
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    # Generate response
    output = model.generate(input_ids, max_length=max_length, num_beams=5, no_repeat_ngram_size=2, top_k=50, top_p=0.95, temperature=0.7)
    response = tokenizer.decode(output[0], skip_special_tokens=True)
    return response


if __name__ == "__main__":
    model, tokenizer = load_model()
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'exit':
            break
        response = generate_response(user_input, model, tokenizer)
        print("Bot:", response)
```
|
wladimir/q-FrozenLake-v1-4x4-noSlippery
|
wladimir
| 2024-01-09T01:20:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-17T12:33:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
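# Assumes `gym` is imported and that `load_from_hub` is the helper defined in the
# Hugging Face Deep RL course notebook (it is not a library import).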
model = load_from_hub(repo_id="wladimir/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
GAI-LLM/KoSOLAR-10.7B-mixed-v13
|
GAI-LLM
| 2024-01-09T01:17:38Z | 56 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T02:07:09Z |
---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/KoSOLAR-10.7B-mixed-v13**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/KoSOLAR-10.7B-mixed-v13 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy.
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow the results on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/KoSOLAR-10.7B-mixed-v13
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/KoSOLAR-10.7B-mixed-v13"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
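The snippet above only loads the weights; a short generation follow-up (an illustrative sketch — the Korean prompt and decoding settings are assumptions, not from the card):
```python
# Continues from the loading snippet above.
prompt = "한국어로 간단히 자기소개를 해줘."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```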
|
rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4
|
rikhoffbauer2
| 2024-01-09T01:07:40Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-09T01:07:35Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: a painting of seaweed underwater that look similar to green vines and leaves
in the style of <s0><s1><s2><s3>
output:
url: image-0.png
- text: a painting of seaweed painted using organic lines of green and black lines
on a black background in the style of <s0><s1><s2><s3>
output:
url: image-1.png
- text: an abstract painting (acrylic on canvas) of seaweed at the bottom of the ocean,
the painting also features a yellow border in the style of <s0><s1><s2><s3>
output:
url: image-2.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: in the style of <s0><s1><s2><s3>
license: openrail++
---
# SDXL LoRA DreamBooth - rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4
<Gallery />
## Model description
### These are rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`naomi-makkelie-seaweed-painting-style-4.safetensors` here 💾](/rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4/blob/main/naomi-makkelie-seaweed-painting-style-4.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:naomi-makkelie-seaweed-painting-style-4:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`naomi-makkelie-seaweed-painting-style-4_emb.safetensors` here 💾](/rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4/blob/main/naomi-makkelie-seaweed-painting-style-4_emb.safetensors)**.
    - Place it in your `embeddings` folder
- Use it by adding `naomi-makkelie-seaweed-painting-style-4_emb` to your prompt. For example, `in the style of naomi-makkelie-seaweed-painting-style-4_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4', filename='naomi-makkelie-seaweed-painting-style-4_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>", "<s2>", "<s3>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>", "<s2>", "<s3>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('in the style of <s0><s1><s2><s3>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1><s2><s3>` in your prompt
## Details
All [Files & versions](/rikhoffbauer2/naomi-makkelie-seaweed-painting-style-4/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
youdiniplays/filipinolingo_model
|
youdiniplays
| 2024-01-09T01:07:39Z | 98 | 2 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:youdiniplays/filipinolingo_model",
"base_model:finetune:youdiniplays/filipinolingo_model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-27T08:30:24Z |
---
license: apache-2.0
base_model: youdiniplays/filipinolingo_model
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: filipinolingo_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# filipinolingo_model
This model is a fine-tuned version of [youdiniplays/filipinolingo_model](https://huggingface.co/youdiniplays/filipinolingo_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6597
- Bleu: 11.8044
- Gen Len: 14.75
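A minimal inference sketch (not part of the auto-generated card), assuming the checkpoint works with the standard `transformers` text2text-generation pipeline; the expected prompt or task prefix is not documented, so the plain English input is an assumption.
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint for English-Filipino translation.
translator = pipeline("text2text-generation", model="youdiniplays/filipinolingo_model")
print(translator("Good morning, how are you?")[0]["generated_text"])
```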
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 4 | 2.6992 | 3.5276 | 13.75 |
| No log | 2.0 | 8 | 2.3483 | 6.8767 | 14.1875 |
| No log | 3.0 | 12 | 2.2289 | 8.4749 | 14.5625 |
| No log | 4.0 | 16 | 2.2552 | 8.537 | 14.375 |
| No log | 5.0 | 20 | 2.3404 | 9.3451 | 13.875 |
| No log | 6.0 | 24 | 2.5126 | 15.043 | 14.0625 |
| No log | 7.0 | 28 | 2.7072 | 14.9624 | 14.125 |
| No log | 8.0 | 32 | 2.8633 | 14.8092 | 14.3125 |
| No log | 9.0 | 36 | 2.9499 | 15.0385 | 14.125 |
| No log | 10.0 | 40 | 2.9954 | 9.0751 | 14.1875 |
| No log | 11.0 | 44 | 3.0306 | 8.321 | 14.125 |
| No log | 12.0 | 48 | 3.0640 | 8.5338 | 14.0625 |
| No log | 13.0 | 52 | 3.0869 | 8.5302 | 14.0625 |
| No log | 14.0 | 56 | 3.1138 | 8.3647 | 14.125 |
| No log | 15.0 | 60 | 3.1254 | 8.5765 | 13.9375 |
| No log | 16.0 | 64 | 3.1203 | 8.5302 | 14.0625 |
| No log | 17.0 | 68 | 3.1250 | 12.0182 | 14.1875 |
| No log | 18.0 | 72 | 3.1192 | 12.0182 | 14.1875 |
| No log | 19.0 | 76 | 3.1231 | 8.5338 | 14.1875 |
| No log | 20.0 | 80 | 3.1155 | 11.9388 | 13.875 |
| No log | 21.0 | 84 | 3.1176 | 11.9402 | 13.875 |
| No log | 22.0 | 88 | 3.1295 | 11.9402 | 13.875 |
| No log | 23.0 | 92 | 3.1487 | 11.9402 | 13.875 |
| No log | 24.0 | 96 | 3.1673 | 12.1489 | 13.875 |
| No log | 25.0 | 100 | 3.1859 | 16.2159 | 13.875 |
| No log | 26.0 | 104 | 3.2061 | 15.9711 | 13.8125 |
| No log | 27.0 | 108 | 3.2147 | 15.9711 | 13.8125 |
| No log | 28.0 | 112 | 3.2223 | 15.9711 | 13.8125 |
| No log | 29.0 | 116 | 3.2345 | 16.2159 | 13.8125 |
| No log | 30.0 | 120 | 3.2414 | 16.1289 | 13.8125 |
| No log | 31.0 | 124 | 3.2448 | 16.1261 | 13.8125 |
| No log | 32.0 | 128 | 3.2446 | 16.1261 | 13.8125 |
| No log | 33.0 | 132 | 3.2307 | 15.8836 | 13.75 |
| No log | 34.0 | 136 | 3.2247 | 15.8803 | 13.75 |
| No log | 35.0 | 140 | 3.2364 | 15.8803 | 13.75 |
| No log | 36.0 | 144 | 3.2507 | 16.1261 | 13.8125 |
| No log | 37.0 | 148 | 3.2608 | 16.1261 | 13.8125 |
| No log | 38.0 | 152 | 3.2893 | 16.536 | 13.8125 |
| No log | 39.0 | 156 | 3.3026 | 16.3582 | 13.8125 |
| No log | 40.0 | 160 | 3.2786 | 16.3582 | 13.9375 |
| No log | 41.0 | 164 | 3.2607 | 16.3548 | 14.0 |
| No log | 42.0 | 168 | 3.2557 | 16.4428 | 14.0 |
| No log | 43.0 | 172 | 3.2648 | 16.1734 | 14.1875 |
| No log | 44.0 | 176 | 3.2455 | 12.2013 | 14.375 |
| No log | 45.0 | 180 | 3.2444 | 12.2013 | 14.375 |
| No log | 46.0 | 184 | 3.2416 | 12.2013 | 14.375 |
| No log | 47.0 | 188 | 3.2412 | 11.8127 | 14.375 |
| No log | 48.0 | 192 | 3.2656 | 16.2611 | 14.3125 |
| No log | 49.0 | 196 | 3.2998 | 16.0785 | 15.1875 |
| No log | 50.0 | 200 | 3.3196 | 16.0785 | 14.6875 |
| No log | 51.0 | 204 | 3.3304 | 15.9095 | 15.0 |
| No log | 52.0 | 208 | 3.3312 | 16.0125 | 15.0 |
| No log | 53.0 | 212 | 3.3265 | 16.0956 | 14.5 |
| No log | 54.0 | 216 | 3.3282 | 16.2714 | 14.0625 |
| No log | 55.0 | 220 | 3.3316 | 16.2714 | 14.0625 |
| No log | 56.0 | 224 | 3.3312 | 16.2714 | 14.0625 |
| No log | 57.0 | 228 | 3.3262 | 15.8593 | 14.5 |
| No log | 58.0 | 232 | 3.3327 | 15.8672 | 14.5 |
| No log | 59.0 | 236 | 3.3157 | 15.6948 | 14.9375 |
| No log | 60.0 | 240 | 3.2849 | 15.8593 | 15.0 |
| No log | 61.0 | 244 | 3.2707 | 15.8593 | 15.0 |
| No log | 62.0 | 248 | 3.2732 | 15.8593 | 15.0625 |
| No log | 63.0 | 252 | 3.2781 | 18.4173 | 15.1875 |
| No log | 64.0 | 256 | 3.2990 | 18.6185 | 15.0 |
| No log | 65.0 | 260 | 3.3277 | 18.6185 | 14.9375 |
| No log | 66.0 | 264 | 3.3475 | 15.1975 | 14.8125 |
| No log | 67.0 | 268 | 3.3274 | 15.2762 | 14.6875 |
| No log | 68.0 | 272 | 3.3065 | 15.5165 | 14.75 |
| No log | 69.0 | 276 | 3.3111 | 18.6185 | 14.625 |
| No log | 70.0 | 280 | 3.3575 | 18.2583 | 14.6875 |
| No log | 71.0 | 284 | 3.4089 | 18.5319 | 14.875 |
| No log | 72.0 | 288 | 3.3937 | 18.6269 | 14.8125 |
| No log | 73.0 | 292 | 3.3043 | 18.6269 | 14.8125 |
| No log | 74.0 | 296 | 3.2596 | 18.7252 | 14.8125 |
| No log | 75.0 | 300 | 3.2515 | 12.9228 | 15.125 |
| No log | 76.0 | 304 | 3.2995 | 13.0338 | 15.125 |
| No log | 77.0 | 308 | 3.3457 | 12.7784 | 15.25 |
| No log | 78.0 | 312 | 3.3949 | 12.5078 | 15.375 |
| No log | 79.0 | 316 | 3.4148 | 12.5862 | 14.625 |
| No log | 80.0 | 320 | 3.4307 | 12.3785 | 14.75 |
| No log | 81.0 | 324 | 3.4095 | 11.6247 | 14.5 |
| No log | 82.0 | 328 | 3.3948 | 11.6247 | 14.5625 |
| No log | 83.0 | 332 | 3.3857 | 11.6247 | 14.4375 |
| No log | 84.0 | 336 | 3.3724 | 11.4452 | 13.875 |
| No log | 85.0 | 340 | 3.3688 | 11.4377 | 13.8125 |
| No log | 86.0 | 344 | 3.3656 | 11.4377 | 13.8125 |
| No log | 87.0 | 348 | 3.3839 | 11.4295 | 13.8125 |
| No log | 88.0 | 352 | 3.4168 | 11.1357 | 13.8125 |
| No log | 89.0 | 356 | 3.4694 | 11.1357 | 13.8125 |
| No log | 90.0 | 360 | 3.4992 | 10.5869 | 13.8125 |
| No log | 91.0 | 364 | 3.5087 | 10.5869 | 13.8125 |
| No log | 92.0 | 368 | 3.4923 | 11.0784 | 14.125 |
| No log | 93.0 | 372 | 3.4931 | 14.544 | 14.5 |
| No log | 94.0 | 376 | 3.5046 | 14.544 | 14.625 |
| No log | 95.0 | 380 | 3.5058 | 14.1526 | 14.375 |
| No log | 96.0 | 384 | 3.5057 | 13.9259 | 14.8125 |
| No log | 97.0 | 388 | 3.5107 | 13.9259 | 14.75 |
| No log | 98.0 | 392 | 3.5173 | 11.0784 | 14.25 |
| No log | 99.0 | 396 | 3.5231 | 11.0887 | 14.3125 |
| No log | 100.0 | 400 | 3.5289 | 11.2541 | 13.75 |
| No log | 101.0 | 404 | 3.5357 | 11.2541 | 13.75 |
| No log | 102.0 | 408 | 3.5417 | 11.1254 | 14.125 |
| No log | 103.0 | 412 | 3.5468 | 11.3608 | 14.25 |
| No log | 104.0 | 416 | 3.5430 | 11.3023 | 14.625 |
| No log | 105.0 | 420 | 3.5337 | 10.9245 | 14.875 |
| No log | 106.0 | 424 | 3.5247 | 10.9783 | 14.8125 |
| No log | 107.0 | 428 | 3.5199 | 10.9783 | 14.8125 |
| No log | 108.0 | 432 | 3.5172 | 10.9783 | 14.8125 |
| No log | 109.0 | 436 | 3.5164 | 11.3128 | 14.9375 |
| No log | 110.0 | 440 | 3.5167 | 11.3128 | 14.9375 |
| No log | 111.0 | 444 | 3.5178 | 11.3128 | 14.9375 |
| No log | 112.0 | 448 | 3.5201 | 11.3128 | 14.9375 |
| No log | 113.0 | 452 | 3.5232 | 11.5924 | 14.9375 |
| No log | 114.0 | 456 | 3.5264 | 11.5924 | 14.9375 |
| No log | 115.0 | 460 | 3.5210 | 11.5924 | 14.9375 |
| No log | 116.0 | 464 | 3.5163 | 11.3128 | 14.6875 |
| No log | 117.0 | 468 | 3.5180 | 11.3706 | 14.625 |
| No log | 118.0 | 472 | 3.5237 | 11.3706 | 14.625 |
| No log | 119.0 | 476 | 3.5285 | 11.6792 | 14.875 |
| No log | 120.0 | 480 | 3.5299 | 11.9509 | 14.875 |
| No log | 121.0 | 484 | 3.5301 | 11.9509 | 14.875 |
| No log | 122.0 | 488 | 3.5318 | 11.9509 | 14.875 |
| No log | 123.0 | 492 | 3.5342 | 11.9509 | 14.875 |
| No log | 124.0 | 496 | 3.5355 | 11.9509 | 14.875 |
| 0.0683 | 125.0 | 500 | 3.5385 | 11.9509 | 14.6875 |
| 0.0683 | 126.0 | 504 | 3.5422 | 11.9509 | 14.6875 |
| 0.0683 | 127.0 | 508 | 3.5454 | 11.9509 | 14.6875 |
| 0.0683 | 128.0 | 512 | 3.5490 | 11.9509 | 14.875 |
| 0.0683 | 129.0 | 516 | 3.5494 | 11.9509 | 14.6875 |
| 0.0683 | 130.0 | 520 | 3.5500 | 11.9509 | 14.6875 |
| 0.0683 | 131.0 | 524 | 3.5513 | 11.6107 | 14.6875 |
| 0.0683 | 132.0 | 528 | 3.5545 | 11.8824 | 14.6875 |
| 0.0683 | 133.0 | 532 | 3.5571 | 11.8202 | 14.6875 |
| 0.0683 | 134.0 | 536 | 3.5597 | 11.8202 | 14.875 |
| 0.0683 | 135.0 | 540 | 3.5611 | 11.8824 | 14.5625 |
| 0.0683 | 136.0 | 544 | 3.5629 | 11.8824 | 14.5625 |
| 0.0683 | 137.0 | 548 | 3.5666 | 11.8824 | 14.5625 |
| 0.0683 | 138.0 | 552 | 3.5715 | 11.8824 | 14.5625 |
| 0.0683 | 139.0 | 556 | 3.5762 | 11.8824 | 14.5625 |
| 0.0683 | 140.0 | 560 | 3.5789 | 11.8824 | 14.5625 |
| 0.0683 | 141.0 | 564 | 3.5807 | 11.8824 | 14.5625 |
| 0.0683 | 142.0 | 568 | 3.5858 | 11.8824 | 14.5625 |
| 0.0683 | 143.0 | 572 | 3.5902 | 11.8202 | 14.875 |
| 0.0683 | 144.0 | 576 | 3.5886 | 11.5499 | 14.875 |
| 0.0683 | 145.0 | 580 | 3.5877 | 11.5499 | 14.875 |
| 0.0683 | 146.0 | 584 | 3.5866 | 11.6107 | 14.875 |
| 0.0683 | 147.0 | 588 | 3.5875 | 11.6107 | 14.875 |
| 0.0683 | 148.0 | 592 | 3.5892 | 11.6107 | 14.875 |
| 0.0683 | 149.0 | 596 | 3.5951 | 11.6792 | 14.875 |
| 0.0683 | 150.0 | 600 | 3.6008 | 11.6792 | 14.875 |
| 0.0683 | 151.0 | 604 | 3.6067 | 11.6792 | 14.875 |
| 0.0683 | 152.0 | 608 | 3.5964 | 11.6107 | 14.875 |
| 0.0683 | 153.0 | 612 | 3.5930 | 11.6107 | 14.875 |
| 0.0683 | 154.0 | 616 | 3.5945 | 11.5499 | 15.125 |
| 0.0683 | 155.0 | 620 | 3.5948 | 11.5499 | 15.125 |
| 0.0683 | 156.0 | 624 | 3.5953 | 11.6107 | 14.875 |
| 0.0683 | 157.0 | 628 | 3.5990 | 11.6107 | 14.875 |
| 0.0683 | 158.0 | 632 | 3.6028 | 11.6107 | 14.875 |
| 0.0683 | 159.0 | 636 | 3.6059 | 11.6026 | 14.875 |
| 0.0683 | 160.0 | 640 | 3.6090 | 11.6026 | 14.875 |
| 0.0683 | 161.0 | 644 | 3.6104 | 11.6026 | 14.875 |
| 0.0683 | 162.0 | 648 | 3.6114 | 11.6026 | 14.875 |
| 0.0683 | 163.0 | 652 | 3.6129 | 11.6026 | 14.875 |
| 0.0683 | 164.0 | 656 | 3.6135 | 11.6026 | 14.875 |
| 0.0683 | 165.0 | 660 | 3.6145 | 11.6026 | 14.875 |
| 0.0683 | 166.0 | 664 | 3.6152 | 11.6026 | 14.875 |
| 0.0683 | 167.0 | 668 | 3.6175 | 11.6026 | 14.875 |
| 0.0683 | 168.0 | 672 | 3.6140 | 11.6026 | 14.875 |
| 0.0683 | 169.0 | 676 | 3.6140 | 11.6026 | 14.875 |
| 0.0683 | 170.0 | 680 | 3.6159 | 11.3715 | 14.875 |
| 0.0683 | 171.0 | 684 | 3.6162 | 11.3715 | 14.875 |
| 0.0683 | 172.0 | 688 | 3.6174 | 11.3715 | 14.875 |
| 0.0683 | 173.0 | 692 | 3.6192 | 11.3715 | 14.875 |
| 0.0683 | 174.0 | 696 | 3.6209 | 11.3715 | 14.875 |
| 0.0683 | 175.0 | 700 | 3.6219 | 11.3715 | 14.875 |
| 0.0683 | 176.0 | 704 | 3.6239 | 11.3715 | 14.875 |
| 0.0683 | 177.0 | 708 | 3.6266 | 11.3715 | 14.875 |
| 0.0683 | 178.0 | 712 | 3.6308 | 11.3715 | 14.875 |
| 0.0683 | 179.0 | 716 | 3.6316 | 11.3715 | 14.875 |
| 0.0683 | 180.0 | 720 | 3.6321 | 11.6026 | 14.875 |
| 0.0683 | 181.0 | 724 | 3.6322 | 11.6026 | 14.875 |
| 0.0683 | 182.0 | 728 | 3.6319 | 11.8757 | 14.875 |
| 0.0683 | 183.0 | 732 | 3.6319 | 11.6577 | 14.875 |
| 0.0683 | 184.0 | 736 | 3.6293 | 11.8757 | 14.875 |
| 0.0683 | 185.0 | 740 | 3.6229 | 11.8757 | 14.875 |
| 0.0683 | 186.0 | 744 | 3.6186 | 11.8757 | 14.875 |
| 0.0683 | 187.0 | 748 | 3.6166 | 11.8757 | 14.875 |
| 0.0683 | 188.0 | 752 | 3.6165 | 11.8757 | 14.875 |
| 0.0683 | 189.0 | 756 | 3.6193 | 11.8757 | 14.875 |
| 0.0683 | 190.0 | 760 | 3.6216 | 11.8757 | 14.875 |
| 0.0683 | 191.0 | 764 | 3.6239 | 11.8757 | 14.875 |
| 0.0683 | 192.0 | 768 | 3.6265 | 11.8757 | 14.875 |
| 0.0683 | 193.0 | 772 | 3.6284 | 11.8757 | 14.875 |
| 0.0683 | 194.0 | 776 | 3.6301 | 11.8684 | 14.8125 |
| 0.0683 | 195.0 | 780 | 3.6319 | 11.8684 | 14.8125 |
| 0.0683 | 196.0 | 784 | 3.6341 | 11.8684 | 14.8125 |
| 0.0683 | 197.0 | 788 | 3.6364 | 11.8684 | 14.8125 |
| 0.0683 | 198.0 | 792 | 3.6386 | 11.8684 | 14.8125 |
| 0.0683 | 199.0 | 796 | 3.6418 | 11.8757 | 14.8125 |
| 0.0683 | 200.0 | 800 | 3.6447 | 11.8757 | 14.8125 |
| 0.0683 | 201.0 | 804 | 3.6463 | 12.1401 | 14.8125 |
| 0.0683 | 202.0 | 808 | 3.6476 | 12.1401 | 14.8125 |
| 0.0683 | 203.0 | 812 | 3.6496 | 11.9402 | 14.5625 |
| 0.0683 | 204.0 | 816 | 3.6518 | 12.0061 | 14.1875 |
| 0.0683 | 205.0 | 820 | 3.6544 | 12.0061 | 14.1875 |
| 0.0683 | 206.0 | 824 | 3.6561 | 12.0061 | 14.1875 |
| 0.0683 | 207.0 | 828 | 3.6574 | 12.206 | 14.3125 |
| 0.0683 | 208.0 | 832 | 3.6588 | 12.1401 | 14.6875 |
| 0.0683 | 209.0 | 836 | 3.6603 | 12.1401 | 14.6875 |
| 0.0683 | 210.0 | 840 | 3.6612 | 12.1401 | 14.6875 |
| 0.0683 | 211.0 | 844 | 3.6620 | 12.1401 | 14.6875 |
| 0.0683 | 212.0 | 848 | 3.6628 | 12.1401 | 14.6875 |
| 0.0683 | 213.0 | 852 | 3.6628 | 12.1401 | 14.6875 |
| 0.0683 | 214.0 | 856 | 3.6633 | 11.8757 | 14.6875 |
| 0.0683 | 215.0 | 860 | 3.6648 | 11.8757 | 14.6875 |
| 0.0683 | 216.0 | 864 | 3.6665 | 11.8757 | 14.6875 |
| 0.0683 | 217.0 | 868 | 3.6678 | 11.8044 | 14.75 |
| 0.0683 | 218.0 | 872 | 3.6690 | 11.8044 | 14.75 |
| 0.0683 | 219.0 | 876 | 3.6699 | 11.8044 | 14.75 |
| 0.0683 | 220.0 | 880 | 3.6693 | 11.8044 | 14.75 |
| 0.0683 | 221.0 | 884 | 3.6689 | 11.8757 | 14.6875 |
| 0.0683 | 222.0 | 888 | 3.6687 | 11.8757 | 14.8125 |
| 0.0683 | 223.0 | 892 | 3.6687 | 11.8757 | 14.8125 |
| 0.0683 | 224.0 | 896 | 3.6690 | 11.8757 | 14.8125 |
| 0.0683 | 225.0 | 900 | 3.6662 | 11.8757 | 14.8125 |
| 0.0683 | 226.0 | 904 | 3.6609 | 11.8757 | 14.8125 |
| 0.0683 | 227.0 | 908 | 3.6561 | 11.8757 | 14.8125 |
| 0.0683 | 228.0 | 912 | 3.6536 | 11.8757 | 14.8125 |
| 0.0683 | 229.0 | 916 | 3.6522 | 11.8757 | 14.8125 |
| 0.0683 | 230.0 | 920 | 3.6515 | 11.8757 | 14.8125 |
| 0.0683 | 231.0 | 924 | 3.6526 | 11.8757 | 14.8125 |
| 0.0683 | 232.0 | 928 | 3.6532 | 11.8757 | 14.8125 |
| 0.0683 | 233.0 | 932 | 3.6537 | 11.8757 | 14.8125 |
| 0.0683 | 234.0 | 936 | 3.6536 | 11.8757 | 14.8125 |
| 0.0683 | 235.0 | 940 | 3.6540 | 11.8757 | 14.8125 |
| 0.0683 | 236.0 | 944 | 3.6540 | 11.8757 | 14.8125 |
| 0.0683 | 237.0 | 948 | 3.6540 | 11.8757 | 14.8125 |
| 0.0683 | 238.0 | 952 | 3.6545 | 11.8757 | 14.8125 |
| 0.0683 | 239.0 | 956 | 3.6553 | 11.8757 | 14.8125 |
| 0.0683 | 240.0 | 960 | 3.6557 | 11.8757 | 14.8125 |
| 0.0683 | 241.0 | 964 | 3.6563 | 11.8757 | 14.8125 |
| 0.0683 | 242.0 | 968 | 3.6573 | 11.8757 | 14.8125 |
| 0.0683 | 243.0 | 972 | 3.6579 | 11.8757 | 14.8125 |
| 0.0683 | 244.0 | 976 | 3.6583 | 11.8757 | 14.8125 |
| 0.0683 | 245.0 | 980 | 3.6594 | 11.8757 | 14.8125 |
| 0.0683 | 246.0 | 984 | 3.6599 | 11.8757 | 14.8125 |
| 0.0683 | 247.0 | 988 | 3.6606 | 11.8757 | 14.8125 |
| 0.0683 | 248.0 | 992 | 3.6513 | 11.8757 | 14.8125 |
| 0.0683 | 249.0 | 996 | 3.6454 | 11.8757 | 14.8125 |
| 0.0005 | 250.0 | 1000 | 3.6429 | 11.8757 | 14.8125 |
| 0.0005 | 251.0 | 1004 | 3.6415 | 11.8757 | 14.8125 |
| 0.0005 | 252.0 | 1008 | 3.6403 | 11.8757 | 14.8125 |
| 0.0005 | 253.0 | 1012 | 3.6400 | 11.8757 | 14.8125 |
| 0.0005 | 254.0 | 1016 | 3.6410 | 11.8757 | 14.8125 |
| 0.0005 | 255.0 | 1020 | 3.6418 | 11.8757 | 14.8125 |
| 0.0005 | 256.0 | 1024 | 3.6430 | 11.8044 | 14.75 |
| 0.0005 | 257.0 | 1028 | 3.6441 | 11.8044 | 14.75 |
| 0.0005 | 258.0 | 1032 | 3.6455 | 11.8044 | 14.75 |
| 0.0005 | 259.0 | 1036 | 3.6463 | 11.8044 | 14.75 |
| 0.0005 | 260.0 | 1040 | 3.6471 | 11.8044 | 14.75 |
| 0.0005 | 261.0 | 1044 | 3.6478 | 11.8044 | 14.75 |
| 0.0005 | 262.0 | 1048 | 3.6487 | 11.8044 | 14.75 |
| 0.0005 | 263.0 | 1052 | 3.6499 | 11.8044 | 14.75 |
| 0.0005 | 264.0 | 1056 | 3.6509 | 11.8044 | 14.75 |
| 0.0005 | 265.0 | 1060 | 3.6516 | 11.8044 | 14.75 |
| 0.0005 | 266.0 | 1064 | 3.6518 | 11.8044 | 14.75 |
| 0.0005 | 267.0 | 1068 | 3.6522 | 11.8044 | 14.75 |
| 0.0005 | 268.0 | 1072 | 3.6524 | 11.8044 | 14.75 |
| 0.0005 | 269.0 | 1076 | 3.6533 | 11.8044 | 14.75 |
| 0.0005 | 270.0 | 1080 | 3.6535 | 11.8044 | 14.75 |
| 0.0005 | 271.0 | 1084 | 3.6543 | 11.8044 | 14.75 |
| 0.0005 | 272.0 | 1088 | 3.6551 | 11.8044 | 14.75 |
| 0.0005 | 273.0 | 1092 | 3.6554 | 11.8044 | 14.75 |
| 0.0005 | 274.0 | 1096 | 3.6559 | 11.8044 | 14.75 |
| 0.0005 | 275.0 | 1100 | 3.6558 | 11.8044 | 14.75 |
| 0.0005 | 276.0 | 1104 | 3.6563 | 11.8044 | 14.75 |
| 0.0005 | 277.0 | 1108 | 3.6567 | 11.8044 | 14.75 |
| 0.0005 | 278.0 | 1112 | 3.6568 | 11.8044 | 14.75 |
| 0.0005 | 279.0 | 1116 | 3.6570 | 11.8044 | 14.75 |
| 0.0005 | 280.0 | 1120 | 3.6573 | 11.8044 | 14.75 |
| 0.0005 | 281.0 | 1124 | 3.6575 | 11.8044 | 14.75 |
| 0.0005 | 282.0 | 1128 | 3.6575 | 11.8044 | 14.75 |
| 0.0005 | 283.0 | 1132 | 3.6574 | 11.8044 | 14.75 |
| 0.0005 | 284.0 | 1136 | 3.6574 | 11.8044 | 14.75 |
| 0.0005 | 285.0 | 1140 | 3.6580 | 11.8044 | 14.75 |
| 0.0005 | 286.0 | 1144 | 3.6579 | 11.8044 | 14.75 |
| 0.0005 | 287.0 | 1148 | 3.6583 | 11.8044 | 14.75 |
| 0.0005 | 288.0 | 1152 | 3.6583 | 11.8044 | 14.75 |
| 0.0005 | 289.0 | 1156 | 3.6589 | 11.8044 | 14.75 |
| 0.0005 | 290.0 | 1160 | 3.6588 | 11.8044 | 14.75 |
| 0.0005 | 291.0 | 1164 | 3.6587 | 11.8044 | 14.75 |
| 0.0005 | 292.0 | 1168 | 3.6588 | 11.8044 | 14.75 |
| 0.0005 | 293.0 | 1172 | 3.6592 | 11.8044 | 14.75 |
| 0.0005 | 294.0 | 1176 | 3.6590 | 11.8044 | 14.75 |
| 0.0005 | 295.0 | 1180 | 3.6592 | 11.8044 | 14.75 |
| 0.0005 | 296.0 | 1184 | 3.6593 | 11.8044 | 14.75 |
| 0.0005 | 297.0 | 1188 | 3.6593 | 11.8044 | 14.75 |
| 0.0005 | 298.0 | 1192 | 3.6598 | 11.8044 | 14.75 |
| 0.0005 | 299.0 | 1196 | 3.6597 | 11.8044 | 14.75 |
| 0.0005 | 300.0 | 1200 | 3.6597 | 11.8044 | 14.75 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
qkrwnstj/lora_pixel_journal2
|
qkrwnstj
| 2024-01-09T01:05:22Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-07T17:18:39Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - qkrwnstj/lora_pixel_journal2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the qkrwnstj/pixel-journal dataset. You can find some example images below.




|
Grffffffy/apache-3.9
|
Grffffffy
| 2024-01-09T01:02:54Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-01-09T00:56:26Z |
---
license: apache-2.0
---
My story begins like this: a dreamy girl who attends the same school as her platonic love. From the first day I saw him, I was enchanted by his kindness, intelligence, and charm. He is popular at school, while I prefer to stay in the background, watching him from afar.
I wish I could show my feelings, but
the fear of rejection and my own insecurity keep me from taking such an important step.
I content myself with admiring him discreetly, following him through the school hallways, watching his classes from a distance, and even attending events I knew he would be at, just to feel his presence.
The days went by, and my feelings kept growing. I know almost everything about him, from his tastes to his dreams and aspirations. Even so, the mere thought of revealing my feelings frightens me deeply.
Sometimes I wonder: does he even notice me? The confusion and the emotions pile up, creating a whirlwind inside me.
I decided that on the day of the school event I would overcome my fears and talk to him.
When I finally approached him to confess my feelings, the words fled from my mouth. Fear and shyness paralyzed me.
Surprised, he watched my agitation and nervousness, smiled gently, and started a conversation. In that moment, I realized that maybe the connection I imagined was not only in my mind.
Although I was not able to declare myself at that moment, the friendship that began to blossom between us brought me a new kind of happiness.
The story of Sofia and her platonic love teaches that, sometimes, the courage to reach out can bring unexpected rewards. Life is full of surprises, and even when things do not go as planned, the journey is worth it.
|
yaocl/whisper-small-hi
|
yaocl
| 2024-01-09T00:55:23Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-08T06:18:17Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4281
- Wer: 34.2504
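For reference, a minimal inference sketch with the 🤗 Transformers pipeline is shown below; the audio path is a placeholder and nothing in it is taken from the training setup described in this card.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="yaocl/whisper-small-hi")

# Transcribe a local audio file (path is a placeholder).
result = asr("sample.wav")
print(result["text"])
```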
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0822 | 2.44 | 1000 | 0.2963 | 35.2874 |
| 0.0219 | 4.89 | 2000 | 0.3452 | 34.0642 |
| 0.0011 | 7.33 | 3000 | 0.4070 | 34.4493 |
| 0.0005 | 9.78 | 4000 | 0.4281 | 34.2504 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jeiku/Streamlined_3B_GGUF
|
jeiku
| 2024-01-09T00:52:47Z | 22 | 1 | null |
[
"gguf",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/No_Robots_Alpaca_StableLM",
"base_model:merge:jeiku/No_Robots_Alpaca_StableLM",
"base_model:jeiku/Rosa_v1_3B",
"base_model:merge:jeiku/Rosa_v1_3B",
"base_model:jeiku/Toxic_DPO_StableLM",
"base_model:merge:jeiku/Toxic_DPO_StableLM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-08T22:49:10Z |
---
base_model:
- jeiku/Rosa_v1_3B
- jeiku/Erotica_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Toxic_DPO_StableLM
- jeiku/Rosa_v1_3B
- jeiku/alpaca-cleaned_StableLM
- jeiku/Rosa_v1_3B
- jeiku/Gnosis_StableLM
- jeiku/Rosa_v1_3B
- jeiku/No_Robots_Alpaca_StableLM
- jeiku/Rosa_v1_3B
- jeiku/smol_PIPPA_StableLM
- jeiku/Rosa_v1_3B
tags:
- mergekit
- merge
---
# output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Erotica_StableLM](https://huggingface.co/jeiku/Erotica_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Toxic_DPO_StableLM](https://huggingface.co/jeiku/Toxic_DPO_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/alpaca-cleaned_StableLM](https://huggingface.co/jeiku/alpaca-cleaned_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/No_Robots_Alpaca_StableLM](https://huggingface.co/jeiku/No_Robots_Alpaca_StableLM)
* [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/smol_PIPPA_StableLM](https://huggingface.co/jeiku/smol_PIPPA_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/Rosa_v1_3B+jeiku/No_Robots_Alpaca_StableLM
parameters:
weight: 0.15
density: 0.166
- model: jeiku/Rosa_v1_3B+jeiku/Gnosis_StableLM
parameters:
weight: 0.2
density: 0.166
- model: jeiku/Rosa_v1_3B+jeiku/Erotica_StableLM
parameters:
weight: 0.15
density: 0.166
- model: jeiku/Rosa_v1_3B+jeiku/smol_PIPPA_StableLM
parameters:
weight: 0.2
density: 0.166
- model: jeiku/Rosa_v1_3B+jeiku/alpaca-cleaned_StableLM
parameters:
weight: 0.1
density: 0.166
- model: jeiku/Rosa_v1_3B+jeiku/Toxic_DPO_StableLM
parameters:
weight: 0.2
density: 0.166
merge_method: dare_ties
base_model: jeiku/Rosa_v1_3B
parameters:
dtype: bfloat16
```
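A minimal sketch of loading one of the GGUF quantizations with llama-cpp-python is shown below; the filename, prompt template, and sampling settings are assumptions, so check the repository's file list and the merged models' preferred prompt formats.
```python
from llama_cpp import Llama

# The filename is an assumption; pick the actual quantization from the repo files.
llm = Llama(model_path="Streamlined_3B.Q4_K_M.gguf", n_ctx=4096)

# An Alpaca-style prompt is assumed here, given the instruction adapters in the merge.
output = llm(
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```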
|
modpotato/public_models
|
modpotato
| 2024-01-09T00:48:06Z | 0 | 0 | null |
[
"rvc",
"audio-to-audio",
"en",
"region:us"
] |
audio-to-audio
| 2023-10-06T04:01:41Z |
---
language:
- en
pipeline_tag: audio-to-audio
tags:
- rvc
---
# mods rvc models
repo for rvc models i've made (dm me on discord (modpotato) for commissions)
[Open an issue](https://huggingface.co/Gustavosta/SowlfieModelsRVC/discussions/new)!
## 🎤 New RVC Models:
(all of these are trained until no improvement noticed)
| Model | Epochs | Language | Preview |
|---|:---:|---:|---|
| [Androxus (Paladins)](https://huggingface.co/modpotato/public_models/blob/main/adnorox_fittest.zip) | 123 epochs | english | [Androxus from Paladins - Billion Dollar Baby](https://www.youtube.com/watch?v=BrOO9AQySPk) |
| [a literal fucking sine wave](https://huggingface.co/modpotato/public_models/blob/main/a%20literal%20sine%20wave_fittest.zip) | 197 epochs | ????? | [games but its sung by a literal sine wave](https://youtu.be/-omYMgHoyRA) |
| [square wave](https://huggingface.co/modpotato/public_models/blob/main/square%20wave.zip) | 42 epochs (may retrain) | ????? | [games but its sung by a literal square wave](https://www.youtube.com/watch?v=nqpvXi_Vpls) |
| [saw wave](https://huggingface.co/modpotato/public_models/blob/main/square%20wave.zip) | 774 epochs | ????? | [games but its sung by a literal saw wave](https://www.youtube.com/watch?v=-iQVvLWSUg0) |
| [Nightbringer Yasuo (LoL)](https://huggingface.co/modpotato/public_models/blob/main/nightbringer%20yasuo.zip) | 370 epochs | english | [i want it that way sung by Nightbringer Yasuo (LoL)](https://www.youtube.com/watch?v=I3qT4StTXI0) |
| [triangle wave](https://huggingface.co/modpotato/public_models/blob/main/triangle%20wave_fittest.zip) | around 350 | ????? | [games but its sung by a literal triangle wave](https://www.youtube.com/watch?v=Ry2OBYCcJO8) |
| [Corvus (Paladins)](https://huggingface.co/modpotato/public_models/blob/main/corvus_fittest.zip) | around 350 | english | [corvus sings ballin](https://youtu.be/RxiqERTi9LU) |
| [Otzdarva (Youtuber)](https://huggingface.co/modpotato/public_models/blob/main/otzdarva_fittest.zip) | no idea | english | [otz sings 3 big balls](https://youtu.be/5kQoVrTDFuA) |
| [DJ Smokey (fixed)](https://huggingface.co/modpotato/public_models/blob/main/dj%20smokey_v2.zip) | no idea | english | [DJ Smokey - ryte night](https://www.youtube.com/watch?v=VNfBj6P2-Fw) |
| [Mordekaiser (LoL)](https://huggingface.co/modpotato/public_models/blob/main/mordekaiser.zip) | no idea | english | none atm |
| [Sydney (Payday 2)](https://huggingface.co/modpotato/public_models/blob/main/sydney_(payday_2)_fittest.zip) | no idea | english | none atm |
| [Jiro (Payday 2)](https://huggingface.co/modpotato/public_models/blob/main/jiro_payday_2_fittest.zip) | no idea | japanese | none atm |
| [car names meme guy](https://huggingface.co/modpotato/public_models/blob/main/car%20names%20guy_fittest.zip) | no idea | english | none atm |
| [Nihilanth](https://huggingface.co/modpotato/public_models/blob/main/Nihilanth_fittest.zip) | no idea | ????? | none atm |
| [OOF sfx](https://huggingface.co/modpotato/public_models/blob/main/oof_sfx_fittest.zip) | no idea | oof | none atm |
| [jeff (half life 2)](https://huggingface.co/modpotato/public_models/blob/main/HL-jeff_fittest.zip) | no idea | ????? | none atm |
| [Slade (Teen Titans)](https://huggingface.co/modpotato/public_models/blob/main/slade_teen-titans.zip) | no idea | ~250 | none atm |
| [metal pipe sfx](https://huggingface.co/modpotato/public_models/blob/main/metal_pipe_fittest.zip) | no idea | ~250 | none atm |
| [NTTS](https://huggingface.co/modpotato/public_models/blob/main/NTTS_mini_fittest.zip) | no idea | ????? | none atm |
| [Bedman / Romeo -ENG- (Guilty Gear Xrd)](https://huggingface.co/modpotato/public_models/blob/main/badman_fittest.zip) | no idea | english | none atm |
| [Captain Price (MW2)](https://huggingface.co/modpotato/public_models/blob/main/price_mw2_fittest.zip) | no idea | english | none atm |
| [Papyrus (If Undertale was Realistic)](https://huggingface.co/modpotato/public_models/blob/main/Papyrus_realisticundertale_fittest.zip) | no idea | english | none atm |
| [Pramanix (Arknights)](https://huggingface.co/modpotato/public_models/blob/main/pramanix_fittest.zip) | no idea | english | none atm |
| [Exusiai (Arknights)](https://huggingface.co/modpotato/public_models/blob/main/Exusiai_arknights_301.zip) | like 300 sumn | english | none atm |
| [Silverash (Arknights)](https://huggingface.co/modpotato/public_models/blob/main/Silverash_arknights_373.zip) | like 300 sumn | english | none atm |
| [Texas (Arknights)](https://huggingface.co/modpotato/public_models/blob/main/texas_arknights_270.zip) | like 300 sumn | english | none atm |
## 🤢 Old RVC Models:
| Model | Epochs | Language | Preview |
|---|:---:|---:|---|
| [DJ Smokey (legalize nuclear bombs)](https://huggingface.co/modpotato/public_models/blob/main/test-dj-smokey.zip) | 1k epochs | english | [DJ Smokey - ryte night](https://youtu.be/VNfBj6P2-Fw) |
| [ChaCha (Akazukin Chacha)](https://huggingface.co/modpotato/public_models/blob/main/chacha.zip) | 300 epochs | english dub | [ChaCha - ryte night](https://youtu.be/wRIIleSQX94) |
| [Link (CD-i)](https://huggingface.co/modpotato/public_models/blob/main/Link%20(CD-i).zip) | 300 epochs | english | [link miss me with that nonsense (actually sung by link)](https://youtu.be/uBaj0kpFKf8) |
yeah i ripped this from some other huggingface acc
|
ucheokechukwu/ppo-Huggy
|
ucheokechukwu
| 2024-01-09T00:43:09Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-09T00:42:57Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ucheokechukwu/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bizarre123/standardized-app
|
bizarre123
| 2024-01-09T00:41:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-01-09T00:38:04Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
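Until the authors fill this in, here is a minimal sketch that assumes the repository holds a causal-LM PEFT adapter on top of mistralai/Mistral-7B-Instruct-v0.2; that assumption is not confirmed by this card.
```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Assumes this repo stores a PEFT adapter for Mistral-7B-Instruct-v0.2 (not confirmed by the card).
model = AutoPeftModelForCausalLM.from_pretrained(
    "bizarre123/standardized-app", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

inputs = tokenizer("[INST] What does this adapter do? [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```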
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
gagan3012/MetaModel_moe_multilingualv2
|
gagan3012
| 2024-01-09T00:35:51Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"mergekit",
"merge",
"chinese",
"arabic",
"english",
"multilingual",
"german",
"french",
"openchat/openchat-3.5-1210",
"beowolx/CodeNinja-1.0-OpenChat-7B",
"maywell/PiVoT-0.1-Starling-LM-RP",
"WizardLM/WizardMath-7B-V1.1",
"davidkim205/komt-mistral-7b-v1",
"OpenBuddy/openbuddy-zephyr-7b-v14.1",
"manishiitg/open-aditi-hi-v1",
"VAGOsolutions/SauerkrautLM-7b-v1-mistral",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T18:01:20Z |
---
license: apache-2.0
tags:
- moe
- mergekit
- merge
- chinese
- arabic
- english
- multilingual
- german
- french
- openchat/openchat-3.5-1210
- beowolx/CodeNinja-1.0-OpenChat-7B
- maywell/PiVoT-0.1-Starling-LM-RP
- WizardLM/WizardMath-7B-V1.1
- davidkim205/komt-mistral-7b-v1
- OpenBuddy/openbuddy-zephyr-7b-v14.1
- manishiitg/open-aditi-hi-v1
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
---
# MetaModel_moe_multilingualv2
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
## 🧩 Configuration
```yaml
base_model: mlabonne/NeuralMarcoro14-7B
dtype: bfloat16
experts:
- positive_prompts:
- chat
- assistant
- tell me
- explain
source_model: openchat/openchat-3.5-1210
- positive_prompts:
- code
- python
- javascript
- programming
- algorithm
source_model: beowolx/CodeNinja-1.0-OpenChat-7B
- positive_prompts:
- storywriting
- write
- scene
- story
- character
source_model: maywell/PiVoT-0.1-Starling-LM-RP
- positive_prompts:
- reason
- math
- mathematics
- solve
- count
source_model: WizardLM/WizardMath-7B-V1.1
- positive_prompts:
- korean
- answer in korean
- korea
source_model: davidkim205/komt-mistral-7b-v1
- positive_prompts:
- chinese
- china
- answer in chinese
source_model: OpenBuddy/openbuddy-zephyr-7b-v14.1
- positive_prompts:
- hindi
- india
- hindu
- answer in hindi
source_model: manishiitg/open-aditi-hi-v1
- positive_prompts:
- german
- germany
- answer in german
- deutsch
source_model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
gate_mode: hidden
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gagan3012/MetaModel_moe_multilingualv2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
JDB03/ppo-Huggy
|
JDB03
| 2024-01-09T00:30:50Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-09T00:27:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JDB03/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
samwell/Taxi-v3
|
samwell
| 2024-01-09T00:28:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-09T00:28:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the small helper defined in the Deep RL course notebook:
# it downloads and unpickles the Q-table dictionary from the Hub (requires `import gym` as well).
model = load_from_hub(repo_id="samwell/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Dotunnorth/a2c-PandaReachDense-v5
|
Dotunnorth
| 2024-01-08T23:59:28Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-08T23:54:03Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
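Until that TODO is filled in, the sketch below shows one way to load and run the checkpoint; the archive filename is an assumption, so check the repository's file list for the actual name.
```python
import gymnasium as gym
import panda_gym  # registers the PandaReachDense-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The checkpoint filename is an assumption.
checkpoint = load_from_hub(
    repo_id="Dotunnorth/a2c-PandaReachDense-v5",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```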
|
nikcheerla/amd-full-v1
|
nikcheerla
| 2024-01-08T23:49:53Z | 48 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] |
text-classification
| 2024-01-08T23:49:34Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'Your call has been forwarded to an automated voice messaging system. 9 '
- text: 'Your call has been forwarded to an automatic voice message system. 7133 '
- text: 'Triage Tronic Industries is not available. Record your message at the tone. '
- text: 'Hi. This is Sid. I''m sorry I missed your call. Please leave me your name
and number, and I will get back to you as soon as I can. Thank you, and have '
- text: 'The Google subscriber you have called is not available. Please leave a message
after the tone. '
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| machine | <ul><li>'Sorry. David Hello. Is not avail '</li><li>'To Mozaz. Please wait as we try to connect you. '</li><li>'Your call has been forwarded to an automated voice messaging system. 2 0 '</li></ul> |
| human | <ul><li>'Good afternoon. Sesame Workshop. How can I help you today? '</li><li>'This is Kenny. '</li><li>'Hello? '</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nikcheerla/amd-full-v1")
# Run inference
preds = model("Your call has been forwarded to an automated voice messaging system. 9 ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 14.6725 | 207 |
| Label | Training Sample Count |
|:--------|:----------------------|
| human | 1495 |
| machine | 6401 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.197 | - |
| 1.0 | 9870 | 0.0001 | 0.0271 |
| 2.0 | 19740 | 0.0 | 0.0272 |
| **3.0** | **29610** | **0.0** | **0.0264** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
s3nh/s3nh-Sonya-Panda-7B-slerp-GGUF
|
s3nh
| 2024-01-08T23:48:19Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T23:27:44Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/s3nh/Sonya-Panda-7B-slerp).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
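Until the author adds this section, a minimal llama-cpp-python sketch is shown below; the GGUF filename and the prompt format are assumptions, so check the repository files and the base model's preferred chat template.
```python
from llama_cpp import Llama

# Filename is an assumption; choose the actual quantization from the repo files.
llm = Llama(model_path="Sonya-Panda-7B-slerp.Q4_K_M.gguf", n_ctx=4096)

output = llm("USER: Tell me a short story about a panda.\nASSISTANT:", max_tokens=200)
print(output["choices"][0]["text"])
```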
# Original model card
|
KMA-kmc1/distilbert-base-uncased-finetuned-emotion
|
KMA-kmc1
| 2024-01-08T23:45:57Z | 85 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-08T23:41:00Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9220402540427051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2249
- Accuracy: 0.922
- F1: 0.9220
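A minimal inference sketch with the Transformers pipeline follows; the example sentence is a placeholder and is not part of the evaluation above.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="KMA-kmc1/distilbert-base-uncased-finetuned-emotion",
)

# Example sentence is a placeholder.
print(classifier("I'm thrilled that training finally finished!"))
```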
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8121 | 1.0 | 250 | 0.3311 | 0.896 | 0.8949 |
| 0.2499 | 2.0 | 500 | 0.2249 | 0.922 | 0.9220 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
alialhousseini/Reinforce-2
|
alialhousseini
| 2024-01-08T23:25:58Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-08T23:25:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 14.80 +/- 12.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vkamenski/Pixelcopter-PLE-v0
|
vkamenski
| 2024-01-08T23:15:47Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-08T23:15:38Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.80 +/- 20.45
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Yemmy1000/results
|
Yemmy1000
| 2024-01-08T23:13:15Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-08T23:01:31Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1
- Datasets 2.12.0
- Tokenizers 0.14.1
|
ntc-ai/SDXL-LoRA-slider.time-lapse-photography
|
ntc-ai
| 2024-01-08T23:12:45Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-08T23:12:42Z |
---
language:
- en
thumbnail: "images/evaluate/time lapse photography.../time lapse photography_17_3.0.png"
widget:
- text: time lapse photography
output:
url: images/time lapse photography_17_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_19_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_20_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_21_3.0.png
- text: time lapse photography
output:
url: images/time lapse photography_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "time lapse photography"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - time lapse photography (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/time lapse photography_17_-3.0.png" width=256 height=256 /> | <img src="images/time lapse photography_17_0.0.png" width=256 height=256 /> | <img src="images/time lapse photography_17_3.0.png" width=256 height=256 /> |
| <img src="images/time lapse photography_19_-3.0.png" width=256 height=256 /> | <img src="images/time lapse photography_19_0.0.png" width=256 height=256 /> | <img src="images/time lapse photography_19_3.0.png" width=256 height=256 /> |
| <img src="images/time lapse photography_20_-3.0.png" width=256 height=256 /> | <img src="images/time lapse photography_20_0.0.png" width=256 height=256 /> | <img src="images/time lapse photography_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
time lapse photography
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.time-lapse-photography', weight_name='time lapse photography.safetensors', adapter_name="time lapse photography")
# Activate the LoRA
pipe.set_adapters(["time lapse photography"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, time lapse photography"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 950+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
s3nh/Sonya-Panda-7B-slerp
|
s3nh
| 2024-01-08T23:12:19Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:NeuralNovel/Panda-7B-v0.1",
"base_model:merge:NeuralNovel/Panda-7B-v0.1",
"base_model:SanjiWatsuki/Sonya-7B",
"base_model:merge:SanjiWatsuki/Sonya-7B",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-08T22:53:45Z |
---
base_model:
- NeuralNovel/Panda-7B-v0.1
- SanjiWatsuki/Sonya-7B
tags:
- mergekit
- merge
license: openrail
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Sonya-Panda-7B-slerp

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NeuralNovel/Panda-7B-v0.1](https://huggingface.co/NeuralNovel/Panda-7B-v0.1)
* [SanjiWatsuki/Sonya-7B](https://huggingface.co/SanjiWatsuki/Sonya-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: SanjiWatsuki/Sonya-7B
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.5, 0.8, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.5, 0.2, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: NeuralNovel/Panda-7B-v0.1
```
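A minimal text-generation sketch with Transformers is shown below; the prompt and sampling settings are placeholders rather than recommendations from the author.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "s3nh/Sonya-Panda-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Write a haiku about merged language models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```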
|