modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-26 12:31:31) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 521 distinct values) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-26 12:31:29) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
remg1997/dynabench-sdxl10 | remg1997 | 2023-09-08T05:56:16Z | 37 | 1 | diffusers | ["diffusers", "onnx", "safetensors", "text-to-image", "stable-diffusion", "arxiv:2307.01952", "arxiv:2211.01324", "arxiv:2108.01073", "arxiv:2112.10752", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2023-09-07T23:08:36Z |
---
license: openrail++
tags:
- text-to-image
- stable-diffusion
duplicated_from: stabilityai/stable-diffusion-xl-base-1.0
---
# SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.19.0:
```bash
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, `accelerate`, as well as the invisible watermark library:
```bash
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
images = pipe(prompt=prompt).images[0]
```
To use the whole base + refiner pipeline as an ensemble of experts you can run:
```py
from diffusers import DiffusionPipeline
import torch
# load both base & refiner
base = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-refiner-1.0",
text_encoder_2=base.text_encoder_2,
vae=base.vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
refiner.to("cuda")
# Define how many steps and what fraction of them to run on each expert (80/20 split) here
n_steps = 40
high_noise_frac = 0.8
prompt = "A majestic lion jumping from a big stone at night"
# run both experts
image = base(
prompt=prompt,
num_inference_steps=n_steps,
denoising_end=high_noise_frac,
output_type="latent",
).images
image = refiner(
prompt=prompt,
num_inference_steps=n_steps,
denoising_start=high_noise_frac,
image=image,
).images[0]
```
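The two-stage variant described in the Model section above (generate a full image with the base model, then refine it with SDEdit-style "img2img" using the same prompt) can be sketched as follows. This is a minimal sketch rather than the card's reference code, and the `strength` value is only illustrative:
```py
from diffusers import DiffusionPipeline, StableDiffusionXLImg2ImgPipeline
import torch

# Stage 1: the base model generates a full image at the desired output size
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")

prompt = "A majestic lion jumping from a big stone at night"
image = base(prompt=prompt).images[0]

# Stage 2: the refiner is applied as img2img (SDEdit) to the first-stage output, with the same prompt
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
refiner.to("cuda")

image = refiner(prompt=prompt, image=image, strength=0.3).images[0]  # strength chosen for illustration only
```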
When using `torch >= 2.0`, you can improve inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
### Optimum
[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).
#### OpenVINO
To install Optimum with the dependencies required for OpenVINO:
```bash
pip install optimum[openvino]
```
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.intel import OVStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).
#### ONNX
To install Optimum with the dependencies required for ONNX Runtime inference:
```bash
pip install optimum[onnxruntime]
```
To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionXLPipeline
+ from optimum.onnxruntime import ORTStableDiffusionXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to produce factual or true representations of people or events; using it to generate such content is therefore out of scope for its abilities.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
|
skk412/rrsk | skk412 | 2023-09-08T05:42:55Z | 0 | 0 | null | ["arxiv:1910.09700", "region:us"] | null | 2023-09-08T05:40:05Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nfliu/deberta-v3-large_boolq | nfliu | 2023-09-08T05:40:57Z | 209,595 | 2 | transformers | ["transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:boolq", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-09-07T05:55:24Z |
---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
datasets:
- boolq
metrics:
- accuracy
model-index:
- name: deberta-v3-large_boolq
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: boolq
type: boolq
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8834862385321101
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large_boolq
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the boolq dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4601
- Accuracy: 0.8835
## Model description
More information needed
## Example
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nfliu/deberta-v3-large_boolq")
tokenizer = AutoTokenizer.from_pretrained("nfliu/deberta-v3-large_boolq")
# Each example is a (question, context) pair.
examples = [
    ("Lake Tahoe is in California", "Lake Tahoe is a popular tourist spot in California."),
    ("Water is wet", "Contrary to popular belief, water is not wet.")
]
encoded_input = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)
probabilities = torch.softmax(model_output.logits, dim=-1).cpu().tolist()

probability_no = [round(prob[0], 2) for prob in probabilities]
probability_yes = [round(prob[1], 2) for prob in probabilities]

for example, p_no, p_yes in zip(examples, probability_no, probability_yes):
    print(f"Question: {example[0]}")
    print(f"Context: {example[1]}")
    print(f"p(No | question, context): {p_no}")
    print(f"p(Yes | question, context): {p_yes}")
    print()
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.85 | 250 | 0.5306 | 0.8823 |
| 0.1151 | 1.69 | 500 | 0.4601 | 0.8835 |
| 0.1151 | 2.54 | 750 | 0.5897 | 0.8792 |
| 0.0656 | 3.39 | 1000 | 0.6477 | 0.8804 |
| 0.0656 | 4.24 | 1250 | 0.6847 | 0.8838 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sensenova/piccolo-base-zh | sensenova | 2023-09-08T05:38:47Z | 1,330 | 33 | transformers | ["transformers", "pytorch", "bert", "feature-extraction", "mteb", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2023-09-04T07:04:26Z |
---
tags:
- mteb
model-index:
- name: piccolo-base-zh
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 49.16558217326158
- type: cos_sim_spearman
value: 51.4049475858823
- type: euclidean_pearson
value: 49.85853741070363
- type: euclidean_spearman
value: 51.501428092542234
- type: manhattan_pearson
value: 49.746099634926296
- type: manhattan_spearman
value: 51.41081804320127
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 52.385361699031854
- type: cos_sim_spearman
value: 52.59114913702212
- type: euclidean_pearson
value: 54.994530439418355
- type: euclidean_spearman
value: 52.54102886188004
- type: manhattan_pearson
value: 54.9503071669608
- type: manhattan_spearman
value: 52.51465652540901
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.236
- type: f1
value: 39.43040092463147
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 60.98952187211432
- type: cos_sim_spearman
value: 62.68189713123115
- type: euclidean_pearson
value: 61.089426749761344
- type: euclidean_spearman
value: 62.41743375544581
- type: manhattan_pearson
value: 61.14747216341409
- type: manhattan_spearman
value: 62.488918956547046
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 38.36392300667918
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 35.645927581489175
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 85.25085782849087
- type: mrr
value: 87.77154761904762
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 86.15357754080844
- type: mrr
value: 88.53547619047617
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.683
- type: map_at_10
value: 35.522999999999996
- type: map_at_100
value: 37.456
- type: map_at_1000
value: 37.576
- type: map_at_3
value: 31.584
- type: map_at_5
value: 33.684999999999995
- type: mrr_at_1
value: 36.459
- type: mrr_at_10
value: 44.534
- type: mrr_at_100
value: 45.6
- type: mrr_at_1000
value: 45.647
- type: mrr_at_3
value: 42.186
- type: mrr_at_5
value: 43.482
- type: ndcg_at_1
value: 36.459
- type: ndcg_at_10
value: 42.025
- type: ndcg_at_100
value: 49.754
- type: ndcg_at_1000
value: 51.815999999999995
- type: ndcg_at_3
value: 37.056
- type: ndcg_at_5
value: 38.962
- type: precision_at_1
value: 36.459
- type: precision_at_10
value: 9.485000000000001
- type: precision_at_100
value: 1.567
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.13
- type: precision_at_5
value: 15.209
- type: recall_at_1
value: 23.683
- type: recall_at_10
value: 52.190999999999995
- type: recall_at_100
value: 84.491
- type: recall_at_1000
value: 98.19600000000001
- type: recall_at_3
value: 37.09
- type: recall_at_5
value: 43.262
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 74.20324714371618
- type: cos_sim_ap
value: 82.32631646194994
- type: cos_sim_f1
value: 76.64052827073876
- type: cos_sim_precision
value: 68.58725761772854
- type: cos_sim_recall
value: 86.83656768763151
- type: dot_accuracy
value: 70.33072760072159
- type: dot_ap
value: 77.46972172609794
- type: dot_f1
value: 73.6668924804026
- type: dot_precision
value: 62.84676354029062
- type: dot_recall
value: 88.98760813654431
- type: euclidean_accuracy
value: 74.78051713770296
- type: euclidean_ap
value: 82.65778389584023
- type: euclidean_f1
value: 77.1843623157445
- type: euclidean_precision
value: 71.05211406096362
- type: euclidean_recall
value: 84.47509936871639
- type: manhattan_accuracy
value: 74.76849067949489
- type: manhattan_ap
value: 82.55694030572194
- type: manhattan_f1
value: 77.1776459569154
- type: manhattan_precision
value: 69.5423855963991
- type: manhattan_recall
value: 86.69628244096329
- type: max_accuracy
value: 74.78051713770296
- type: max_ap
value: 82.65778389584023
- type: max_f1
value: 77.1843623157445
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 72.99799999999999
- type: map_at_10
value: 81.271
- type: map_at_100
value: 81.53399999999999
- type: map_at_1000
value: 81.535
- type: map_at_3
value: 80.049
- type: map_at_5
value: 80.793
- type: mrr_at_1
value: 73.13
- type: mrr_at_10
value: 81.193
- type: mrr_at_100
value: 81.463
- type: mrr_at_1000
value: 81.464
- type: mrr_at_3
value: 80.067
- type: mrr_at_5
value: 80.741
- type: ndcg_at_1
value: 73.34
- type: ndcg_at_10
value: 84.503
- type: ndcg_at_100
value: 85.643
- type: ndcg_at_1000
value: 85.693
- type: ndcg_at_3
value: 82.135
- type: ndcg_at_5
value: 83.401
- type: precision_at_1
value: 73.34
- type: precision_at_10
value: 9.536
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.54
- type: precision_at_5
value: 18.398
- type: recall_at_1
value: 72.99799999999999
- type: recall_at_10
value: 94.31
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 99.789
- type: recall_at_3
value: 87.935
- type: recall_at_5
value: 90.991
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.537
- type: map_at_10
value: 81.292
- type: map_at_100
value: 84.031
- type: map_at_1000
value: 84.066
- type: map_at_3
value: 56.571000000000005
- type: map_at_5
value: 71.082
- type: mrr_at_1
value: 91.2
- type: mrr_at_10
value: 93.893
- type: mrr_at_100
value: 93.955
- type: mrr_at_1000
value: 93.95700000000001
- type: mrr_at_3
value: 93.61699999999999
- type: mrr_at_5
value: 93.767
- type: ndcg_at_1
value: 91.2
- type: ndcg_at_10
value: 88.255
- type: ndcg_at_100
value: 90.813
- type: ndcg_at_1000
value: 91.144
- type: ndcg_at_3
value: 87.435
- type: ndcg_at_5
value: 85.961
- type: precision_at_1
value: 91.2
- type: precision_at_10
value: 42.14
- type: precision_at_100
value: 4.817
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 78.467
- type: precision_at_5
value: 65.75999999999999
- type: recall_at_1
value: 26.537
- type: recall_at_10
value: 89.262
- type: recall_at_100
value: 97.783
- type: recall_at_1000
value: 99.49799999999999
- type: recall_at_3
value: 58.573
- type: recall_at_5
value: 75.154
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 48.5
- type: map_at_10
value: 57.898
- type: map_at_100
value: 58.599000000000004
- type: map_at_1000
value: 58.616
- type: map_at_3
value: 55.1
- type: map_at_5
value: 56.80500000000001
- type: mrr_at_1
value: 48.5
- type: mrr_at_10
value: 57.898
- type: mrr_at_100
value: 58.599000000000004
- type: mrr_at_1000
value: 58.616
- type: mrr_at_3
value: 55.1
- type: mrr_at_5
value: 56.80500000000001
- type: ndcg_at_1
value: 48.5
- type: ndcg_at_10
value: 62.876
- type: ndcg_at_100
value: 66.00200000000001
- type: ndcg_at_1000
value: 66.467
- type: ndcg_at_3
value: 57.162
- type: ndcg_at_5
value: 60.263999999999996
- type: precision_at_1
value: 48.5
- type: precision_at_10
value: 7.870000000000001
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 21.032999999999998
- type: precision_at_5
value: 14.14
- type: recall_at_1
value: 48.5
- type: recall_at_10
value: 78.7
- type: recall_at_100
value: 92.7
- type: recall_at_1000
value: 96.39999999999999
- type: recall_at_3
value: 63.1
- type: recall_at_5
value: 70.7
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 44.34782608695652
- type: f1
value: 36.401426200836205
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.25891181988743
- type: ap
value: 50.54636280166089
- type: f1
value: 78.55080202541332
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.02878561337955
- type: cos_sim_spearman
value: 75.39509553139982
- type: euclidean_pearson
value: 73.92598696939956
- type: euclidean_spearman
value: 75.5471147196853
- type: manhattan_pearson
value: 73.88049486090739
- type: manhattan_spearman
value: 75.51361990583285
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 64.739
- type: map_at_10
value: 74.039
- type: map_at_100
value: 74.38
- type: map_at_1000
value: 74.39099999999999
- type: map_at_3
value: 72.074
- type: map_at_5
value: 73.29299999999999
- type: mrr_at_1
value: 66.92
- type: mrr_at_10
value: 74.636
- type: mrr_at_100
value: 74.94
- type: mrr_at_1000
value: 74.95
- type: mrr_at_3
value: 72.911
- type: mrr_at_5
value: 73.981
- type: ndcg_at_1
value: 66.92
- type: ndcg_at_10
value: 77.924
- type: ndcg_at_100
value: 79.471
- type: ndcg_at_1000
value: 79.73400000000001
- type: ndcg_at_3
value: 74.17200000000001
- type: ndcg_at_5
value: 76.236
- type: precision_at_1
value: 66.92
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 27.989000000000004
- type: precision_at_5
value: 17.874000000000002
- type: recall_at_1
value: 64.739
- type: recall_at_10
value: 89.324
- type: recall_at_100
value: 96.342
- type: recall_at_1000
value: 98.38900000000001
- type: recall_at_3
value: 79.378
- type: recall_at_5
value: 84.28099999999999
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.97108271687962
- type: f1
value: 66.8625981386677
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.32212508406187
- type: f1
value: 73.33875034670166
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 49.0
- type: map_at_10
value: 55.022999999999996
- type: map_at_100
value: 55.550999999999995
- type: map_at_1000
value: 55.608000000000004
- type: map_at_3
value: 53.417
- type: map_at_5
value: 54.372
- type: mrr_at_1
value: 49.3
- type: mrr_at_10
value: 55.176
- type: mrr_at_100
value: 55.703
- type: mrr_at_1000
value: 55.76
- type: mrr_at_3
value: 53.567
- type: mrr_at_5
value: 54.522000000000006
- type: ndcg_at_1
value: 49.0
- type: ndcg_at_10
value: 58.089999999999996
- type: ndcg_at_100
value: 60.988
- type: ndcg_at_1000
value: 62.580999999999996
- type: ndcg_at_3
value: 54.803000000000004
- type: ndcg_at_5
value: 56.508
- type: precision_at_1
value: 49.0
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 0.8210000000000001
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.6
- type: precision_at_5
value: 12.58
- type: recall_at_1
value: 49.0
- type: recall_at_10
value: 67.80000000000001
- type: recall_at_100
value: 82.1
- type: recall_at_1000
value: 94.8
- type: recall_at_3
value: 58.8
- type: recall_at_5
value: 62.9
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 28.87237408060796
- type: mrr
value: 27.83015873015873
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 70.25
- type: f1
value: 70.29055400149645
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 65.56578234975636
- type: cos_sim_ap
value: 70.89354058570412
- type: cos_sim_f1
value: 71.21024370095002
- type: cos_sim_precision
value: 58.48032564450475
- type: cos_sim_recall
value: 91.02428722280888
- type: dot_accuracy
value: 64.86193827828912
- type: dot_ap
value: 70.17697803463875
- type: dot_f1
value: 70.68676716917922
- type: dot_precision
value: 58.57043719639139
- type: dot_recall
value: 89.1235480464625
- type: euclidean_accuracy
value: 64.86193827828912
- type: euclidean_ap
value: 70.26847152773904
- type: euclidean_f1
value: 70.9984152139461
- type: euclidean_precision
value: 56.81674064679771
- type: euclidean_recall
value: 94.61457233368532
- type: manhattan_accuracy
value: 65.40335679480238
- type: manhattan_ap
value: 70.22941558736018
- type: manhattan_f1
value: 71.09712937475423
- type: manhattan_precision
value: 56.64160401002506
- type: manhattan_recall
value: 95.45934530095037
- type: max_accuracy
value: 65.56578234975636
- type: max_ap
value: 70.89354058570412
- type: max_f1
value: 71.21024370095002
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.92999999999999
- type: ap
value: 87.16059195012956
- type: f1
value: 89.90917477839415
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 27.74161502387672
- type: cos_sim_spearman
value: 31.58353529723325
- type: euclidean_pearson
value: 32.43729673844635
- type: euclidean_spearman
value: 31.59527486602242
- type: manhattan_pearson
value: 32.37467059678786
- type: manhattan_spearman
value: 31.44408004951894
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 36.233749845501194
- type: cos_sim_spearman
value: 36.47808586229587
- type: euclidean_pearson
value: 32.663447466546806
- type: euclidean_spearman
value: 34.45830454037139
- type: manhattan_pearson
value: 32.80239212096335
- type: manhattan_spearman
value: 34.581060433895125
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 63.05131937664673
- type: cos_sim_spearman
value: 66.51353746725948
- type: euclidean_pearson
value: 61.24016998745561
- type: euclidean_spearman
value: 66.07115266049276
- type: manhattan_pearson
value: 64.55660243659054
- type: manhattan_spearman
value: 66.80282149562386
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.45533692882996
- type: cos_sim_spearman
value: 70.6045637565602
- type: euclidean_pearson
value: 72.75588977483554
- type: euclidean_spearman
value: 73.36630581886473
- type: manhattan_pearson
value: 72.72517409326954
- type: manhattan_spearman
value: 73.35358940437355
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.45779474032288
- type: mrr
value: 76.0782192023729
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.458
- type: map_at_10
value: 74.355
- type: map_at_100
value: 78.158
- type: map_at_1000
value: 78.233
- type: map_at_3
value: 52.2
- type: map_at_5
value: 64.14
- type: mrr_at_1
value: 88.37
- type: mrr_at_10
value: 91.117
- type: mrr_at_100
value: 91.231
- type: mrr_at_1000
value: 91.23599999999999
- type: mrr_at_3
value: 90.645
- type: mrr_at_5
value: 90.948
- type: ndcg_at_1
value: 88.37
- type: ndcg_at_10
value: 82.384
- type: ndcg_at_100
value: 86.431
- type: ndcg_at_1000
value: 87.163
- type: ndcg_at_3
value: 83.993
- type: ndcg_at_5
value: 82.411
- type: precision_at_1
value: 88.37
- type: precision_at_10
value: 41.131
- type: precision_at_100
value: 4.9799999999999995
- type: precision_at_1000
value: 0.515
- type: precision_at_3
value: 73.651
- type: precision_at_5
value: 61.634
- type: recall_at_1
value: 26.458
- type: recall_at_10
value: 81.3
- type: recall_at_100
value: 94.342
- type: recall_at_1000
value: 98.103
- type: recall_at_3
value: 54.020999999999994
- type: recall_at_5
value: 67.781
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 46.814
- type: f1
value: 45.580027683507666
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 61.43613064816144
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 53.01838461793776
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 59.3
- type: map_at_10
value: 69.158
- type: map_at_100
value: 69.60300000000001
- type: map_at_1000
value: 69.611
- type: map_at_3
value: 67.467
- type: map_at_5
value: 68.432
- type: mrr_at_1
value: 59.199999999999996
- type: mrr_at_10
value: 69.108
- type: mrr_at_100
value: 69.553
- type: mrr_at_1000
value: 69.56099999999999
- type: mrr_at_3
value: 67.417
- type: mrr_at_5
value: 68.382
- type: ndcg_at_1
value: 59.3
- type: ndcg_at_10
value: 73.54
- type: ndcg_at_100
value: 75.652
- type: ndcg_at_1000
value: 75.868
- type: ndcg_at_3
value: 70.074
- type: ndcg_at_5
value: 71.808
- type: precision_at_1
value: 59.3
- type: precision_at_10
value: 8.709999999999999
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 25.867
- type: precision_at_5
value: 16.36
- type: recall_at_1
value: 59.3
- type: recall_at_10
value: 87.1
- type: recall_at_100
value: 96.89999999999999
- type: recall_at_1000
value: 98.6
- type: recall_at_3
value: 77.60000000000001
- type: recall_at_5
value: 81.8
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.69999999999999
- type: ap
value: 66.65020528563207
- type: f1
value: 83.00542769081453
---
## piccolo-base-zh
piccolo是一个通用embedding模型(中文), 由来自商汤科技的通用模型组完成训练。piccolo借鉴了E5以及GTE的训练流程,采用了两阶段的训练方式。
在第一阶段中,我们搜集和爬取了4亿的中文文本对(可视为弱监督文本对数据),并采用二元组的softmax对比学习损失来优化模型。
在第二阶段中,我们搜集整理了2000万人工标注的中文文本对(精标数据),并采用带有难负样本的三元组的softmax对比学习损失来帮助模型更好地优化。
目前,我们提供了piccolo-base-zh和piccolo-large-zh两个模型。
piccolo is a general text embedding model (Chinese), powered by the General Model Group from SenseTime Research.
Inspired by E5 and GTE, piccolo is trained with a two-stage pipeline. In the first stage, we collect and crawl 400 million weakly supervised Chinese text pairs from the Internet
and train the model with a pairwise (text, text_pos) softmax contrastive loss.
In the second stage, we collect 20 million human-labeled Chinese text pairs and finetune the model with a triplet (text, text_pos, text_neg) contrastive loss.
Currently, we offer two model sizes: piccolo-base-zh and piccolo-large-zh.
## Metric
我们将piccolo与其他的开源embedding模型在CMTEB榜单上进行了比较,请参考CMTEB榜单。我们在eval文件夹中提供了复现结果的脚本。
We compared piccolo with other embedding models on the C-MTEB benchmark; please refer to the C-MTEB leaderboard.
We provide scripts for reproducing the results in the "eval" folder.
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**piccolo-large-zh**] | 0.65 | 1024 | 512 | **64.11** | 67.03 | 47.04 | 78.38 | 65.98 | 70.93 | 58.02 |
| [bge-large-zh]| 1.3 | 1024| 512 | 63.96 | 68.32 | 48.39 | 78.94 | 65.11 | 71.52 | 54.98 |
| [**piccolo-base-zh**]| 0.2 | 768 | 512 | **63.66** | 66.98 | 47.12 | 76.61 | 66.68 | 71.2 | 55.9 |
| [bge-large-zh-no-instruct]| 1.3 | 1024 | 512 | 63.4 | 68.58 | 50.01 | 76.77 | 64.9 | 70.54 | 53 |
| [bge-base-zh]| 0.41 | 768 | 512 | 62.8 | 67.07 | 47.64 | 77.5 | 64.91 | 69.53 | 54.12 |
## Usage
在sentence-transformers package中可以很容易地调用piccolo模型
The piccolo model can easily be used with the sentence-transformers package:
```python
# for s2s dataset, you can use piccolo as below
# 对于短对短数据集,下面是通用的使用方式
from sentence_transformers import SentenceTransformer
sentences = ["数据1", "数据2"]
model = SentenceTransformer('sensenova/piccolo-base-zh')
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for s2p dataset, we recommend to add instruction for passage retrieval
# 对于短对长数据集,我们推荐添加instruction,来帮助模型更好地进行检索。
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["doc_1", "doc_2"]
model = SentenceTransformer('sensenova/piccolo-base-zh')
q_embeddings = model.encode(["查询:" + q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(["结果:" + p for p in passages], normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
## Training Detail
### pretrain
pretrain 通常不需要太大的max length, 推荐128。小的max length用以提高batch size,加快训练速度,从而适应大规模数据。
pretrain 损失我们采用二元组contrastive loss,不加入hard negative, 直接采用inbatch negative,在实际训练中,我们使用了32张40G A100进行训练,单卡的batch size为1024。
Pretraining usually does not require a large max length; 128 is recommended. A smaller max length increases the batch size and speeds up training, which suits large-scale data.
For the pretraining loss we use a pairwise contrastive loss with in-batch negatives only (no hard negatives). In practice, we trained on 32 A100 40GB GPUs with a per-GPU batch size of 1024.
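For illustration only (this is not the actual training code), a pairwise softmax contrastive loss with in-batch negatives can be sketched as below; the temperature value is an assumption:
```python
import torch
import torch.nn.functional as F

def inbatch_contrastive_loss(text_emb, pos_emb, temperature=0.05):
    """Pairwise softmax contrastive loss with in-batch negatives.
    text_emb, pos_emb: (batch, dim) L2-normalized embeddings of (text, text_pos) pairs."""
    logits = text_emb @ pos_emb.T / temperature                       # (batch, batch) similarity matrix
    labels = torch.arange(text_emb.size(0), device=text_emb.device)   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```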
### finetune
finetune 通常会将 max length扩增到512。用以适应更大长度的文本输入,finetune时会多sample S2P的数据,以增强模型在retrieval任务上的性能。
finetune 损失采用三元组contrastive loss,加入hard negative,neg num通常设置为2-7,loss计算方式可以参考GTE里的improved contrastive loss。
注意: 我们给query和passage设置了不同的max length,query的max length始终保持在64。
For finetuning, we usually expand the max length to 512 to accommodate longer text inputs; we also sample more S2P data during finetuning to strengthen the model on retrieval tasks.
The finetuning loss is a triplet contrastive loss with hard negatives; the number of negatives is usually set to 2-7. For the loss formulation, see the improved contrastive loss in GTE.
Note: we set different max lengths for query and passage; the query max length is always kept at 64.
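A simplified sketch of a triplet-style loss with explicit hard negatives is shown below (the improved contrastive loss in GTE additionally mixes in in-batch negatives; this reduced form is only illustrative):
```python
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(text_emb, pos_emb, neg_emb, temperature=0.05):
    """Softmax contrastive loss over one positive and several hard negatives per example.
    text_emb, pos_emb: (batch, dim); neg_emb: (batch, n_neg, dim); all L2-normalized."""
    pos_logits = (text_emb * pos_emb).sum(dim=-1, keepdim=True)        # (batch, 1)
    neg_logits = torch.einsum("bd,bnd->bn", text_emb, neg_emb)         # (batch, n_neg)
    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature
    labels = torch.zeros(text_emb.size(0), dtype=torch.long, device=text_emb.device)  # index 0 is the positive
    return F.cross_entropy(logits, labels)
```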
### Others
一些有用的trick:
1. 减小显存的方式: fp16 + gradient checkpointing + ZERO STAGE1 (stage2 不支持双塔结构下的gradient checkpointing) 相关issue见: https://github.com/microsoft/DeepSpeed/issues/988
2. dataset sampler,我们采用了M3E的dataset sampler,用以保证每个batch里的样本均来自于一个dataset,负样本更有价值。
3. instruction。instruction在我们的实验中对retrieval任务有非常大的性能提升,我们在每个训练样本前都加入'查询: '和'结果: '这样的instruction。
some useful tricks:
1. To reduce memory usage: fp16 + gradient checkpointing + ZeRO stage 1 (stage 2 does not support gradient checkpointing with the dual-encoder structure). For the related issue, see: https://github.com/microsoft/DeepSpeed/issues/988
2. Dataset sampler: we use M3E's dataset sampler to ensure that all samples in a batch come from the same dataset, which makes the in-batch negatives more valuable.
3. Instruction: instructions greatly improved retrieval performance in our experiments. We prepend instructions such as '查询: ' (query) and '结果: ' (result) to each training sample.
## Reference
这里我们列出了我们参考过的embedding项目和论文
1. [M3E](https://github.com/wangyuxinwhy/uniem)。非常棒的中文开源embedding项目,收集和整理了较多的中文高质量数据集,uniem也是一个不错的框架。
2. [Text2vec](https://github.com/shibing624/text2vec)。另一个一个非常棒的中文开源embedding项目。
3. [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)。智源AI开源的embedding模型,收集和整理了CMTEB benchmark,填补了中文embedding系统性评测的空缺。
4. [E5](https://github.com/microsoft/unilm/tree/master/e5)。来自微软的一篇文章,有非常详细的消融实验以及数据处理过滤细节。
5. [GTE](https://huggingface.co/thenlper/gte-base)。一篇来自阿里达摩的embedding论文。
Here we list the embedding projects and papers we have referenced:
1. [M3E](https://github.com/wangyuxinwhy/uniem). A great Chinese open-source embedding project that collects and organizes a large number of high-quality Chinese datasets; uniem is also a good framework.
2. [Text2vec](https://github.com/shibing624/text2vec). Another great Chinese open-source embedding project.
3. [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). BAAI's open-source embedding models. They collected and organized the C-MTEB benchmark, filling the gap in systematic evaluation of Chinese embeddings.
4. [E5](https://github.com/microsoft/unilm/tree/master/e5). From Microsoft, with very detailed ablation experiments and data-processing and filtering details.
5. [GTE](https://huggingface.co/thenlper/gte-base). An embedding paper from Alibaba DAMO Academy.
## License
Piccolo 使用 MIT License,免费商用。
Piccolo uses the MIT License and is free for commercial use.
## Acknowledgement
piccolo 由来自商汤科技研究院的通用模型组完成训练,[Jinkin](https://huggingface.co/Jinkin) 完成了代码实现和模型训练, [Jinkin](https://huggingface.co/Jinkin),
[CCCCxxx](https://huggingface.co/CCCCxxx) 一起完成了数据搜集、整理和评测工作. 项目由 [Gaomengya](https://huggingface.co/gaomengya) 和 [chaorenwu111](https://huggingface.co/chaorenwu111) 主导。
同时,感谢[lux0933](https://huggingface.co/lux0933)以及[yangkai001](https://huggingface.co/yangkai001)的交流与讨论,提供了非常多有用的建议。
piccolo is powered by the General Model Group from SenseTime Research.
[Jinkin](https://huggingface.co/Jinkin) completed the code implementation and model training.
[Jinkin](https://huggingface.co/Jinkin) and [CCCCxxx](https://huggingface.co/CCCCxxx) completed the data collection, processing, and model evaluation together.
The project is led by [Gaomengya](https://huggingface.co/gaomengya) and [chaorenwu111](https://huggingface.co/chaorenwu111).
We also thank [lux0933](https://huggingface.co/lux0933) and [yangkai001](https://huggingface.co/yangkai001) for discussions that provided many useful suggestions.
|
omthkkr/whisper-tiny-en-US | omthkkr | 2023-09-08T05:28:42Z | 76 | 0 | transformers | ["transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-09-08T04:36:36Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-en-US
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MINDS14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.36363636363636365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-en-US
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the MINDS14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7138
- Wer Ortho: 0.3652
- Wer: 0.3636
## Model description
More information needed
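As a starting point, inference can be sketched with the standard 🤗 Transformers ASR pipeline (this snippet is not part of the original card; the repository id is taken from the listing above and the audio path is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint (assumed standard usage, not from the original card).
asr = pipeline("automatic-speech-recognition", model="omthkkr/whisper-tiny-en-US")

# Transcribe a local audio file (path is illustrative).
result = asr("sample.wav")
print(result["text"])
```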
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0007 | 17.86 | 500 | 0.6509 | 0.3455 | 0.3412 |
| 0.0002 | 35.71 | 1000 | 0.7138 | 0.3652 | 0.3636 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jasmineplows/ppo-LunarLander-v2 | jasmineplows | 2023-09-08T04:55:08Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-09-08T04:54:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 237.96 +/- 38.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
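In the meantime, a minimal loading and evaluation sketch is given below (not from the original card; the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed).
checkpoint = load_from_hub(
    repo_id="jasmineplows/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent on LunarLander-v2.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```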
|
Onutoa/1_7e-3_1_0.9 | Onutoa | 2023-09-08T04:42:59Z | 47 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:super_glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-09-08T01:42:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_7e-3_1_0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_7e-3_1_0.9
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2572
- Accuracy: 0.7505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.007
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0455 | 1.0 | 590 | 1.6132 | 0.3786 |
| 0.9655 | 2.0 | 1180 | 0.6681 | 0.6217 |
| 0.7392 | 3.0 | 1770 | 0.5308 | 0.4557 |
| 0.7812 | 4.0 | 2360 | 0.4957 | 0.5654 |
| 0.7422 | 5.0 | 2950 | 1.2018 | 0.6217 |
| 0.7053 | 6.0 | 3540 | 0.7295 | 0.4804 |
| 0.7016 | 7.0 | 4130 | 1.1783 | 0.3804 |
| 0.6381 | 8.0 | 4720 | 0.3895 | 0.6541 |
| 0.5364 | 9.0 | 5310 | 0.5057 | 0.6768 |
| 0.5598 | 10.0 | 5900 | 0.3659 | 0.6798 |
| 0.5779 | 11.0 | 6490 | 0.5754 | 0.6740 |
| 0.4901 | 12.0 | 7080 | 0.3128 | 0.7055 |
| 0.5212 | 13.0 | 7670 | 0.2977 | 0.7083 |
| 0.479 | 14.0 | 8260 | 1.0718 | 0.6352 |
| 0.4701 | 15.0 | 8850 | 0.4170 | 0.7138 |
| 0.4286 | 16.0 | 9440 | 0.3207 | 0.6985 |
| 0.4164 | 17.0 | 10030 | 0.2996 | 0.7086 |
| 0.3649 | 18.0 | 10620 | 0.3665 | 0.6823 |
| 0.4102 | 19.0 | 11210 | 0.2847 | 0.7300 |
| 0.3819 | 20.0 | 11800 | 0.3577 | 0.6731 |
| 0.3755 | 21.0 | 12390 | 0.5441 | 0.6058 |
| 0.3373 | 22.0 | 12980 | 0.6394 | 0.5657 |
| 0.3512 | 23.0 | 13570 | 0.2683 | 0.7159 |
| 0.3124 | 24.0 | 14160 | 0.2775 | 0.7269 |
| 0.3029 | 25.0 | 14750 | 0.3565 | 0.7333 |
| 0.2864 | 26.0 | 15340 | 0.5595 | 0.6318 |
| 0.3107 | 27.0 | 15930 | 0.8309 | 0.5557 |
| 0.2674 | 28.0 | 16520 | 0.2615 | 0.7394 |
| 0.2927 | 29.0 | 17110 | 0.6786 | 0.7049 |
| 0.2672 | 30.0 | 17700 | 0.2945 | 0.7407 |
| 0.2595 | 31.0 | 18290 | 0.3927 | 0.7327 |
| 0.2646 | 32.0 | 18880 | 0.2765 | 0.7162 |
| 0.2604 | 33.0 | 19470 | 0.2854 | 0.7199 |
| 0.2364 | 34.0 | 20060 | 0.3032 | 0.7034 |
| 0.2465 | 35.0 | 20650 | 0.3092 | 0.7456 |
| 0.2334 | 36.0 | 21240 | 0.5941 | 0.7248 |
| 0.2392 | 37.0 | 21830 | 0.3794 | 0.6875 |
| 0.2303 | 38.0 | 22420 | 0.3033 | 0.7235 |
| 0.2258 | 39.0 | 23010 | 0.3078 | 0.7266 |
| 0.2189 | 40.0 | 23600 | 0.3052 | 0.7425 |
| 0.2126 | 41.0 | 24190 | 0.3418 | 0.7352 |
| 0.2213 | 42.0 | 24780 | 0.2660 | 0.7382 |
| 0.2115 | 43.0 | 25370 | 0.4016 | 0.7364 |
| 0.2109 | 44.0 | 25960 | 0.3010 | 0.7456 |
| 0.2391 | 45.0 | 26550 | 0.4426 | 0.7303 |
| 0.2115 | 46.0 | 27140 | 0.2762 | 0.7407 |
| 0.2014 | 47.0 | 27730 | 0.2864 | 0.7437 |
| 0.1925 | 48.0 | 28320 | 0.2657 | 0.7382 |
| 0.2017 | 49.0 | 28910 | 0.2866 | 0.7505 |
| 0.2145 | 50.0 | 29500 | 0.3055 | 0.7202 |
| 0.1933 | 51.0 | 30090 | 0.5254 | 0.6550 |
| 0.2115 | 52.0 | 30680 | 0.2996 | 0.7477 |
| 0.1893 | 53.0 | 31270 | 0.2759 | 0.7471 |
| 0.1834 | 54.0 | 31860 | 0.2543 | 0.7440 |
| 0.1828 | 55.0 | 32450 | 0.2676 | 0.7492 |
| 0.1801 | 56.0 | 33040 | 0.2680 | 0.7505 |
| 0.1699 | 57.0 | 33630 | 0.2554 | 0.7440 |
| 0.1748 | 58.0 | 34220 | 0.3117 | 0.7505 |
| 0.1842 | 59.0 | 34810 | 0.3374 | 0.7483 |
| 0.1684 | 60.0 | 35400 | 0.2781 | 0.7471 |
| 0.1695 | 61.0 | 35990 | 0.3007 | 0.7434 |
| 0.177 | 62.0 | 36580 | 0.2816 | 0.7443 |
| 0.1586 | 63.0 | 37170 | 0.2587 | 0.7422 |
| 0.1643 | 64.0 | 37760 | 0.2751 | 0.7450 |
| 0.1719 | 65.0 | 38350 | 0.2875 | 0.7489 |
| 0.167 | 66.0 | 38940 | 0.2729 | 0.7434 |
| 0.1644 | 67.0 | 39530 | 0.2623 | 0.7373 |
| 0.16 | 68.0 | 40120 | 0.2534 | 0.7407 |
| 0.156 | 69.0 | 40710 | 0.2525 | 0.7419 |
| 0.1549 | 70.0 | 41300 | 0.2565 | 0.7297 |
| 0.1598 | 71.0 | 41890 | 0.2479 | 0.7425 |
| 0.1666 | 72.0 | 42480 | 0.3158 | 0.7462 |
| 0.1498 | 73.0 | 43070 | 0.2722 | 0.7456 |
| 0.1495 | 74.0 | 43660 | 0.3985 | 0.7428 |
| 0.153 | 75.0 | 44250 | 0.3153 | 0.7477 |
| 0.1576 | 76.0 | 44840 | 0.3075 | 0.7459 |
| 0.1536 | 77.0 | 45430 | 0.2629 | 0.7468 |
| 0.1508 | 78.0 | 46020 | 0.2489 | 0.7434 |
| 0.1502 | 79.0 | 46610 | 0.2671 | 0.7523 |
| 0.1509 | 80.0 | 47200 | 0.2771 | 0.7523 |
| 0.1352 | 81.0 | 47790 | 0.2611 | 0.7425 |
| 0.1438 | 82.0 | 48380 | 0.2556 | 0.7388 |
| 0.1407 | 83.0 | 48970 | 0.2809 | 0.7263 |
| 0.1417 | 84.0 | 49560 | 0.2580 | 0.7459 |
| 0.1404 | 85.0 | 50150 | 0.2557 | 0.7486 |
| 0.1437 | 86.0 | 50740 | 0.2821 | 0.7498 |
| 0.1368 | 87.0 | 51330 | 0.2766 | 0.7508 |
| 0.14 | 88.0 | 51920 | 0.2664 | 0.7498 |
| 0.1351 | 89.0 | 52510 | 0.2592 | 0.7450 |
| 0.1338 | 90.0 | 53100 | 0.2895 | 0.7514 |
| 0.1361 | 91.0 | 53690 | 0.2638 | 0.7526 |
| 0.1356 | 92.0 | 54280 | 0.2470 | 0.7468 |
| 0.1356 | 93.0 | 54870 | 0.2694 | 0.7511 |
| 0.1349 | 94.0 | 55460 | 0.2833 | 0.7502 |
| 0.1331 | 95.0 | 56050 | 0.2940 | 0.7477 |
| 0.131 | 96.0 | 56640 | 0.2760 | 0.7492 |
| 0.1311 | 97.0 | 57230 | 0.2520 | 0.7465 |
| 0.1282 | 98.0 | 57820 | 0.2604 | 0.7489 |
| 0.1258 | 99.0 | 58410 | 0.2518 | 0.7459 |
| 0.1331 | 100.0 | 59000 | 0.2572 | 0.7505 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rideadragon/ppo-Huggy | rideadragon | 2023-09-08T04:40:25Z | 10 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2023-09-08T04:40:21Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rideadragon/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Gayathri142214002/Finetune_Pegasus_1 | Gayathri142214002 | 2023-09-08T04:39:25Z | 5 | 0 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-07-18T05:19:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Finetune_Pegasus_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetune_Pegasus_1
This model is a fine-tuned version of [tuner007/pegasus_paraphrase](https://huggingface.co/tuner007/pegasus_paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0942
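Since the card does not yet include a usage example, here is a minimal paraphrasing sketch. It assumes the checkpoint keeps the same Pegasus seq2seq interface as its base model; the input sentence and generation settings are illustrative only.
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "Gayathri142214002/Finetune_Pegasus_1"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# Illustrative input; any sentence to paraphrase works here.
text = "The meeting has been moved to next Thursday afternoon."
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
outputs = model.generate(**batch, max_length=60, num_beams=5, num_return_sequences=1)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```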
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7293 | 0.21 | 10 | 1.2156 |
| 1.3661 | 0.41 | 20 | 1.1203 |
| 1.3897 | 0.62 | 30 | 1.0665 |
| 1.3356 | 0.82 | 40 | 1.0304 |
| 1.171 | 1.03 | 50 | 1.0098 |
| 0.8665 | 1.23 | 60 | 1.0062 |
| 0.7864 | 1.44 | 70 | 1.0266 |
| 0.8785 | 1.64 | 80 | 1.0190 |
| 1.0596 | 1.85 | 90 | 1.0218 |
| 1.0386 | 2.05 | 100 | 1.0213 |
| 0.7452 | 2.26 | 110 | 1.0639 |
| 0.6807 | 2.46 | 120 | 1.0619 |
| 0.5764 | 2.67 | 130 | 1.0530 |
| 0.87 | 2.87 | 140 | 1.0571 |
| 0.7724 | 3.08 | 150 | 1.0563 |
| 0.5847 | 3.28 | 160 | 1.0692 |
| 0.6053 | 3.49 | 170 | 1.0652 |
| 0.6416 | 3.69 | 180 | 1.0531 |
| 0.6392 | 3.9 | 190 | 1.0416 |
| 0.6138 | 4.1 | 200 | 1.0489 |
| 0.6093 | 4.31 | 210 | 1.0668 |
| 0.5484 | 4.51 | 220 | 1.0843 |
| 0.6082 | 4.72 | 230 | 1.0771 |
| 0.56 | 4.92 | 240 | 1.0745 |
| 0.5796 | 5.13 | 250 | 1.0770 |
| 0.6597 | 5.33 | 260 | 1.0722 |
| 0.4834 | 5.54 | 270 | 1.0726 |
| 0.4232 | 5.74 | 280 | 1.0682 |
| 0.5432 | 5.95 | 290 | 1.0769 |
| 0.5944 | 6.15 | 300 | 1.0851 |
| 0.4663 | 6.36 | 310 | 1.0884 |
| 0.4568 | 6.56 | 320 | 1.0915 |
| 0.4565 | 6.77 | 330 | 1.0942 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lightthief/tenoch
|
lightthief
| 2023-09-08T04:20:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-08T04:20:36Z |
---
license: creativeml-openrail-m
---
|
yunhuan929/falcon_180b
|
yunhuan929
| 2023-09-08T04:00:07Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:1911.02150",
"arxiv:2101.00027",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2205.14135",
"arxiv:2306.01116",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-08T03:58:21Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- de
- es
- fr
inference: false
license: unknown
extra_gated_heading: "Acknowledge license to access the repository"
extra_gated_prompt: "You agree to the [Falcon-180B TII license](https://huggingface.co/spaces/tiiuae/falcon-180b-license/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/tiiuae/falcon-180b-license/blob/main/ACCEPTABLE_USE_POLICY.txt)."
extra_gated_button_content: "I agree to the terms and conditions of the Falcon-180B TII license and to the acceptable use policy"
---
# 🚀 Falcon-180B
**Falcon-180B is a 180B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 3,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the [Falcon-180B TII License](https://huggingface.co/spaces/tiiuae/falcon-180b-license/blob/main/LICENSE.txt) and [Acceptable Use Policy](https://huggingface.co/spaces/tiiuae/falcon-180b-license/blob/main/ACCEPTABLE_USE_POLICY.txt).**
*Paper coming soon* 😊
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://hf.co/blog/falcon-180b) or this [one](https://huggingface.co/blog/falcon) from the release of the 40B!
Note that since the 180B is larger than what can easily be handled with `transformers`+`accelerate`, we recommend using [Text Generation Inference](https://github.com/huggingface/text-generation-inference).
You will need **at least 400GB of memory** to swiftly run inference with Falcon-180B.
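As a hedged illustration of the TGI route (the host, port, and sampling parameters below are assumptions, not part of this card), a Text Generation Inference server that already serves Falcon-180B can be queried over HTTP instead of loading the weights in-process:
```python
import requests

# Assumes a TGI server is already running and serving Falcon-180B,
# e.g. the official text-generation-inference container mapped to localhost:8080.
response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Girafatron is obsessed with giraffes.",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=300,
)
print(response.json()["generated_text"])
```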
## Why use Falcon-180B?
* **It is the best open-access model currently available, and one of the best models overall.** Falcon-180B outperforms [LLaMA-2](https://huggingface.co/meta-llama/Llama-2-70b-hf), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive license allowing for commercial use**.
* ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-180B-Chat](https://huggingface.co/tiiuae/falcon-180b-chat).
💸 **Looking for a smaller, less expensive model?** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) are Falcon-180B's little brothers!
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
# Model Card for Falcon-180B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** [Falcon-180B TII License](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) and [Acceptable Use Policy](https://huggingface.co/tiiuae/falcon-180B/blob/main/ACCEPTABLE_USE_POLICY.txt).
### Model Source
- **Paper:** *coming soon*.
## Uses
See the [acceptable use policy](https://huggingface.co/tiiuae/falcon-180B/blob/main/ACCEPTABLE_USE_POLICY.txt).
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-180B is trained mostly on English, German, Spanish, French, with limited capabilities also in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-180B consider finetuning it for their specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model
To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-180b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-180B was trained on 3,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 75% | 750B | massive web crawl |
| RefinedWeb-Europe | 7% | 70B | European massive web crawl |
| Books | 6% | 60B | |
| Conversations | 5% | 50B | Reddit, StackOverflow, HackerNews |
| Code | 5% | 50B | |
| Technical | 2% | 20B | arXiv, PubMed, USPTO, etc. |
RefinedWeb-Europe is made of the following languages:
| **Language** | **Fraction of multilingual data** | **Tokens** |
|--------------|-----------------------------------|------------|
| German | 26% | 18B |
| Spanish | 24% | 17B |
| French | 23% | 16B |
| _Italian_ | 7% | 5B |
| _Portuguese_ | 4% | 3B |
| _Polish_ | 4% | 3B |
| _Dutch_ | 4% | 3B |
| _Romanian_ | 3% | 2B |
| _Czech_ | 3% | 2B |
| _Swedish_ | 2% | 1B |
The data was tokenized with the Falcon tokenizer.
### Training Procedure
Falcon-180B was trained on up to 4,096 A100 40GB GPUs, using a 3D parallelism strategy (TP=8, PP=8, DP=64) combined with ZeRO.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 1.25e-4 | 4B tokens warm-up, cosine decay to 1.25e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2048 | 100B tokens ramp-up |
#### Speeds, Sizes, Times
Training started in early 2023.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-180B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with two layer norms.
For multiquery, we are using an internal variant which uses independent keys and values per tensor parallel degree (so-called multigroup).
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 80 | |
| `d_model` | 14848 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-180B was trained on AWS SageMaker, on up to 4,096 A100 40GB GPUs in P4d instances.
#### Software
Falcon-180B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊 (actually this time). In the meantime, you can use the following information to cite:
```
@article{falcon,
title={The Falcon Series of Language Models: Towards Open Frontier Models},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Alhammadi, Maitha and Daniele, Mazzotta and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## Contact
falconllm@tii.ae
|
guidoivetta/Peppa-Pig
|
guidoivetta
| 2023-09-08T03:55:51Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-08T03:54:57Z |
---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Peppa-Pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Peppa-Pig
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8259
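As the card has no usage snippet yet, a minimal generation sketch is shown below, assuming the checkpoint keeps the standard GPT-2 causal-LM interface of its base model; the Spanish prompt and sampling settings are illustrative.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="guidoivetta/Peppa-Pig")
# Illustrative prompt; the model was fine-tuned from a Spanish GPT-2 base.
print(generator("Peppa y George fueron a", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```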
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9432 | 1.0 | 280 | 0.8825 |
| 0.7771 | 2.0 | 560 | 0.8466 |
| 0.6387 | 3.0 | 840 | 0.8293 |
| 0.5383 | 4.0 | 1120 | 0.8249 |
| 0.5661 | 5.0 | 1400 | 0.8259 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DataBindu/swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes
|
DataBindu
| 2023-09-08T03:47:50Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft",
"base_model:finetune:microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-06T19:51:57Z |
---
license: apache-2.0
base_model: microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7129629629629629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to24-192to384-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0311
- Accuracy: 0.7130
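A minimal classification sketch, assuming the checkpoint exposes the standard SwinV2 image-classification interface; the input file name is a placeholder for any microbe image.
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "DataBindu/swinv2-large-patch4-window12to24-192to384-22kto1k-ft-microbes"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example_microbe.jpg")  # placeholder path, not shipped with this repo
inputs = processor(image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```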
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8445 | 0.98 | 15 | 2.8535 | 0.3194 |
| 2.1358 | 1.97 | 30 | 1.9654 | 0.4491 |
| 1.5947 | 2.95 | 45 | 1.4172 | 0.6204 |
| 1.045 | 4.0 | 61 | 1.1698 | 0.6806 |
| 0.985 | 4.98 | 76 | 1.1927 | 0.6852 |
| 0.775 | 5.97 | 91 | 1.1012 | 0.6898 |
| 0.7207 | 6.95 | 106 | 1.0311 | 0.7130 |
| 0.6611 | 7.87 | 120 | 1.0311 | 0.6991 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Rebecca19990101/vicuna-7b-instruct-ft-adapters-chemical-v2
|
Rebecca19990101
| 2023-09-08T03:16:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-08T03:16:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
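The quantization config listed above can be reconstructed roughly as the following sketch (values copied from this card; how the LoRA adapter is then attached is not specified here):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings reported in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```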
### Framework versions
- PEFT 0.6.0.dev0
|
mmnga/cyberagent-open-calm-3b-gguf
|
mmnga
| 2023-09-08T03:09:01Z | 271 | 0 | null |
[
"gguf",
"gpt-neox",
"ja",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-21T10:20:13Z |
---
license: cc-by-sa-4.0
language:
- ja
tags:
- gpt-neox
---
# cyberagent-open-calm-3b-gguf
This is a gguf-format conversion of [open-calm-3b, published by cyberagent](https://huggingface.co/cyberagent/open-calm-3b).
Other models are listed below:
[mmnga/cyberagent-open-calm-7b-gguf](https://huggingface.co/mmnga/cyberagent-open-calm-7b-gguf)
[mmnga/cyberagent-open-calm-3b-gguf](https://huggingface.co/mmnga/cyberagent-open-calm-3b-gguf)
[mmnga/cyberagent-open-calm-1b-gguf](https://huggingface.co/mmnga/cyberagent-open-calm-1b-gguf)
Note: this is a trial version on a branch. Once gptneox is implemented in upstream llama.cpp, this gguf file may no longer be usable.
***[The GitHub repository readme is here](https://github.com/mmnga/llama.cpp/tree/mmnga-dev)***
## Usage (trial)
```
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j
./main -m 'cyberagent-open-calm-3b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
```
**CUBLAS**
```
LLAMA_CUBLAS=1 make -j
./main -m 'cyberagent-open-calm-3b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' -ngl 32
```
|
mmnga/cyberagent-open-calm-7b-gguf
|
mmnga
| 2023-09-08T03:08:46Z | 372 | 2 | null |
[
"gguf",
"gpt-neox",
"ja",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-21T09:55:24Z |
---
license: cc-by-sa-4.0
language:
- ja
tags:
- gpt-neox
---
# cyberagent-open-calm-7b-gguf
This is a gguf-format conversion of [open-calm-7b, published by cyberagent](https://huggingface.co/cyberagent/open-calm-7b).
Other models are listed below:
[mmnga/cyberagent-open-calm-7b-gguf](https://huggingface.co/mmnga/cyberagent-open-calm-7b-gguf)
[mmnga/cyberagent-open-calm-3b-gguf](https://huggingface.co/mmnga/cyberagent-open-calm-3b-gguf)
[mmnga/cyberagent-open-calm-1b-gguf](https://huggingface.co/mmnga/cyberagent-open-calm-1b-gguf)
Note: this is a trial version on a branch. Once gptneox is implemented in upstream llama.cpp, this gguf file may no longer be usable.
***[The GitHub repository readme is here](https://github.com/mmnga/llama.cpp/tree/mmnga-dev)***
## Usage (trial)
```
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j
./main -m 'cyberagent-open-calm-7b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
```
**CUBLAS**
```
LLAMA_CUBLAS=1 make -j
./main -m 'cyberagent-open-calm-7b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' -ngl 40
```
|
chgenly/q-FrozenLake-v1-4x4-noSlippery
|
chgenly
| 2023-09-08T03:06:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-08T03:06:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # or: import gymnasium as gym

# load_from_hub is the helper function defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="chgenly/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tmnam20/codellama_instruct_spider_e10
|
tmnam20
| 2023-09-08T02:59:22Z | 0 | 0 | null |
[
"generated_from_trainer",
"dataset:tmnam20/SpiderInstruct",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2023-09-05T15:37:12Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
datasets:
- tmnam20/SpiderInstruct
model-index:
- name: codellama_instruct_spider_e10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama_instruct_spider_e10
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the tmnam20/SpiderInstruct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2393
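A hedged generation sketch follows; it assumes the fine-tuned weights load like the base CodeLlama-Instruct checkpoint, and the instruction format and question are illustrative rather than taken from this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "tmnam20/codellama_instruct_spider_e10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative text-to-SQL style prompt (format assumed, not documented in the card).
prompt = "[INST] Write a SQL query that lists the names of all singers older than 30. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```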
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.822 | 0.37 | 100 | 0.5313 |
| 0.3014 | 0.74 | 200 | 0.2763 |
| 0.2091 | 1.11 | 300 | 0.2469 |
| 0.1697 | 1.48 | 400 | 0.2401 |
| 0.1495 | 1.85 | 500 | 0.2395 |
| 0.1256 | 2.22 | 600 | 0.2525 |
| 0.1097 | 2.59 | 700 | 0.2641 |
| 0.1107 | 2.96 | 800 | 0.2617 |
| 0.0951 | 3.33 | 900 | 0.2683 |
| 0.0882 | 3.7 | 1000 | 0.2892 |
| 0.0818 | 4.06 | 1100 | 0.3134 |
| 0.075 | 4.43 | 1200 | 0.2978 |
| 0.0745 | 4.8 | 1300 | 0.3095 |
| 0.0642 | 5.17 | 1400 | 0.3261 |
| 0.0622 | 5.54 | 1500 | 0.3201 |
| 0.0573 | 5.91 | 1600 | 0.3343 |
| 0.0552 | 6.28 | 1700 | 0.3396 |
| 0.0523 | 6.65 | 1800 | 0.3602 |
| 0.0538 | 7.02 | 1900 | 0.3464 |
| 0.0467 | 7.39 | 2000 | 0.3622 |
| 0.0465 | 7.76 | 2100 | 0.3697 |
| 0.044 | 8.13 | 2200 | 0.3890 |
| 0.043 | 8.5 | 2300 | 0.3785 |
| 0.0375 | 8.87 | 2400 | 0.3860 |
| 0.0384 | 9.24 | 2500 | 0.3952 |
| 0.0363 | 9.61 | 2600 | 0.3940 |
| 0.0352 | 9.98 | 2700 | 0.3985 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
thainq107/bert-base-banking77-pt2
|
thainq107
| 2023-09-08T02:58:57Z | 68 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T16:32:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9311734343722707
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2844
- F1: 0.9312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2215 | 1.0 | 313 | 1.1811 | 0.7646 |
| 0.6252 | 2.0 | 626 | 0.4665 | 0.9120 |
| 0.3323 | 3.0 | 939 | 0.3294 | 0.9281 |
| 0.1446 | 4.0 | 1252 | 0.3051 | 0.9267 |
| 0.0994 | 5.0 | 1565 | 0.2844 | 0.9312 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1
- Datasets 2.9.0
- Tokenizers 0.13.3
|
BiaDd/emotion-model
|
BiaDd
| 2023-09-08T02:56:47Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-08T02:36:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: emotion-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
- name: F1
type: f1
value: 0.9336283314309987
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.9335
- F1: 0.9336
## Model description
Predicts the emotions of provided text

## Intended uses & limitations
For sentiment analysis
## Training and evaluation data
Data from "emotion" dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3058 | 0.9125 | 0.9098 |
| 0.5417 | 2.0 | 500 | 0.2131 | 0.9335 | 0.9336 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mmnga/line-corp-japanese-large-lm-3.6b-gguf
|
mmnga
| 2023-09-08T02:53:05Z | 60 | 0 | null |
[
"gguf",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T18:18:41Z |
---
license: apache-2.0
language:
- ja
---
# line-corporation/japanese-large-lm-3.6b
This is a gguf conversion of [japanese-large-lm-3.6b, published by line-corporation](https://huggingface.co/line-corporation/japanese-large-lm-3.6b).
Other models are listed below:
GPT-NEOX
[mmnga/line-corp-japanese-large-lm-3.6b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-gguf)
[mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf)
GPT-2
[mmnga/line-corp-japanese-large-lm-1.7b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-gguf)
[mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf)
*Note: this is a trial version on a branch. Once gptneox and gpt2 are implemented in upstream llama.cpp, this gguf file may no longer be usable.*
***[The GitHub repository readme is here](https://github.com/mmnga/llama.cpp/tree/mmnga-dev)***
## Usage (trial)
```
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j
./main -m 'line-corp-japanese-large-lm-3.6b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
```
**CUBLAS**
```
LLAMA_CUBLAS=1 make -j
./main -m 'line-corp-japanese-large-lm-3.6b-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' -ngl 32
```
**Conventional CPU execution**
~~~~bash
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j gptneox
./gptneox -m 'line-corp-japanese-large-lm-3.6b-q4_0.gguf' -n 128 -p 'ユーザー: 吾輩って猫ですか? システム: '
~~~~
|
mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf
|
mmnga
| 2023-09-08T02:52:29Z | 89 | 2 | null |
[
"gguf",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T18:01:40Z |
---
license: apache-2.0
language:
- ja
---
# line-corporation/japanese-large-lm-3.6b-instruction-sft
This is a gguf conversion of [japanese-large-lm-3.6b-instruction-sft, published by line-corporation](https://huggingface.co/line-corporation/japanese-large-lm-3.6b-instruction-sft).
Other models are listed below:
GPT-NEOX
[mmnga/line-corp-japanese-large-lm-3.6b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-gguf)
[mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf)
GPT-2
[mmnga/line-corp-japanese-large-lm-1.7b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-gguf)
[mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf)
*Note: this is a trial version on a branch. Once gptneox and gpt2 are implemented in upstream llama.cpp, this gguf file may no longer be usable.*
***[The GitHub repository readme is here](https://github.com/mmnga/llama.cpp/tree/mmnga-dev)***
## Usage (trial)
```
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j
./main -m 'line-corp-japanese-large-lm-3.6b-instruction-sft-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
```
**CUBLAS**
```
LLAMA_CUBLAS=1 make -j
./main -m 'line-corp-japanese-large-lm-3.6b-instruction-sft-q4_0.gguf' -n 128 -p '吾輩は猫である。名前は実を言うと、' -ngl 32
```
**Conventional CPU execution**
~~~~bash
git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j gptneox
./gptneox -m 'line-corp-japanese-large-lm-3.6b-instruction-sft-q4_0.gguf' -n 128 -p 'ユーザー: 吾輩って猫ですか? システム: '
~~~~
|
kasperchen/Reinforce-CartPole-v1
|
kasperchen
| 2023-09-08T02:51:59Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-16T05:30:02Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hw2942/chinese-macbert-base-SSEC
|
hw2942
| 2023-09-08T02:46:03Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:hfl/chinese-macbert-base",
"base_model:finetune:hfl/chinese-macbert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-08T02:39:46Z |
---
license: apache-2.0
base_model: hfl/chinese-macbert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chinese-macbert-base-wallstreetcn-morning-news-market-overview-SSEC-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-macbert-base-wallstreetcn-morning-news-market-overview-SSEC-v6
This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4847
- Accuracy: 0.7188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 34 | 0.6893 | 0.4375 |
| No log | 2.0 | 68 | 0.6156 | 0.6562 |
| No log | 3.0 | 102 | 0.8698 | 0.6562 |
| No log | 4.0 | 136 | 0.6379 | 0.6562 |
| No log | 5.0 | 170 | 0.8517 | 0.7188 |
| No log | 6.0 | 204 | 1.1949 | 0.6875 |
| No log | 7.0 | 238 | 1.2695 | 0.6875 |
| No log | 8.0 | 272 | 1.3954 | 0.7188 |
| No log | 9.0 | 306 | 1.5019 | 0.6875 |
| No log | 10.0 | 340 | 1.4847 | 0.7188 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Onutoa/1_9e-3_5_0.5
|
Onutoa
| 2023-09-08T02:43:57Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T23:43:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_9e-3_5_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_9e-3_5_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8603
- Accuracy: 0.7489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.009
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7616 | 1.0 | 590 | 2.7583 | 0.3798 |
| 2.2507 | 2.0 | 1180 | 1.8432 | 0.6294 |
| 2.5953 | 3.0 | 1770 | 3.4928 | 0.4532 |
| 2.3305 | 4.0 | 2360 | 1.5737 | 0.6486 |
| 1.9577 | 5.0 | 2950 | 2.6604 | 0.6263 |
| 1.7557 | 6.0 | 3540 | 1.2734 | 0.6761 |
| 1.6227 | 7.0 | 4130 | 3.4140 | 0.5119 |
| 1.4961 | 8.0 | 4720 | 1.2029 | 0.7043 |
| 1.3331 | 9.0 | 5310 | 1.2170 | 0.7092 |
| 1.3007 | 10.0 | 5900 | 1.7625 | 0.6725 |
| 1.2049 | 11.0 | 6490 | 1.0667 | 0.7070 |
| 1.1087 | 12.0 | 7080 | 0.9915 | 0.7156 |
| 1.1023 | 13.0 | 7670 | 1.0683 | 0.6924 |
| 1.0404 | 14.0 | 8260 | 1.1711 | 0.7248 |
| 1.0287 | 15.0 | 8850 | 1.0966 | 0.7297 |
| 0.9405 | 16.0 | 9440 | 0.9352 | 0.7107 |
| 0.8558 | 17.0 | 10030 | 0.9269 | 0.7205 |
| 0.8273 | 18.0 | 10620 | 0.9574 | 0.7235 |
| 0.7798 | 19.0 | 11210 | 0.9598 | 0.7385 |
| 0.7646 | 20.0 | 11800 | 0.9004 | 0.7287 |
| 0.7505 | 21.0 | 12390 | 0.9389 | 0.7174 |
| 0.7273 | 22.0 | 12980 | 0.9234 | 0.7358 |
| 0.6971 | 23.0 | 13570 | 0.9055 | 0.7315 |
| 0.6815 | 24.0 | 14160 | 0.8711 | 0.7352 |
| 0.6729 | 25.0 | 14750 | 1.0923 | 0.7437 |
| 0.6151 | 26.0 | 15340 | 0.8950 | 0.7254 |
| 0.6291 | 27.0 | 15930 | 1.1086 | 0.6945 |
| 0.6243 | 28.0 | 16520 | 0.9179 | 0.7410 |
| 0.609 | 29.0 | 17110 | 1.0778 | 0.7410 |
| 0.5733 | 30.0 | 17700 | 0.9548 | 0.7422 |
| 0.5742 | 31.0 | 18290 | 1.1436 | 0.7413 |
| 0.5675 | 32.0 | 18880 | 0.8956 | 0.7450 |
| 0.5578 | 33.0 | 19470 | 0.9040 | 0.7382 |
| 0.5339 | 34.0 | 20060 | 0.8730 | 0.7453 |
| 0.5284 | 35.0 | 20650 | 1.0258 | 0.7486 |
| 0.5116 | 36.0 | 21240 | 1.2775 | 0.7382 |
| 0.5215 | 37.0 | 21830 | 0.9275 | 0.7477 |
| 0.5038 | 38.0 | 22420 | 0.8780 | 0.7394 |
| 0.5073 | 39.0 | 23010 | 0.9095 | 0.7468 |
| 0.4897 | 40.0 | 23600 | 0.8864 | 0.7410 |
| 0.4927 | 41.0 | 24190 | 1.1312 | 0.7391 |
| 0.4941 | 42.0 | 24780 | 0.8809 | 0.7339 |
| 0.4629 | 43.0 | 25370 | 1.1564 | 0.7419 |
| 0.4754 | 44.0 | 25960 | 0.9223 | 0.7413 |
| 0.457 | 45.0 | 26550 | 0.8677 | 0.7422 |
| 0.4398 | 46.0 | 27140 | 1.0571 | 0.7471 |
| 0.4612 | 47.0 | 27730 | 0.8773 | 0.7401 |
| 0.4464 | 48.0 | 28320 | 0.9260 | 0.7477 |
| 0.4779 | 49.0 | 28910 | 0.8712 | 0.7425 |
| 0.443 | 50.0 | 29500 | 0.8886 | 0.7413 |
| 0.4445 | 51.0 | 30090 | 0.8968 | 0.7431 |
| 0.4274 | 52.0 | 30680 | 0.9516 | 0.7495 |
| 0.4239 | 53.0 | 31270 | 0.8773 | 0.7443 |
| 0.4143 | 54.0 | 31860 | 1.0295 | 0.7401 |
| 0.4359 | 55.0 | 32450 | 0.8879 | 0.7453 |
| 0.4197 | 56.0 | 33040 | 0.8712 | 0.7489 |
| 0.397 | 57.0 | 33630 | 1.0037 | 0.7544 |
| 0.402 | 58.0 | 34220 | 0.8789 | 0.7554 |
| 0.4015 | 59.0 | 34810 | 0.8532 | 0.7523 |
| 0.4008 | 60.0 | 35400 | 0.8840 | 0.7523 |
| 0.3943 | 61.0 | 35990 | 0.9475 | 0.7462 |
| 0.3968 | 62.0 | 36580 | 0.9413 | 0.7465 |
| 0.394 | 63.0 | 37170 | 0.8878 | 0.7480 |
| 0.3914 | 64.0 | 37760 | 0.8737 | 0.7511 |
| 0.3959 | 65.0 | 38350 | 0.8553 | 0.7486 |
| 0.3881 | 66.0 | 38940 | 0.8905 | 0.7495 |
| 0.379 | 67.0 | 39530 | 0.8956 | 0.7489 |
| 0.3821 | 68.0 | 40120 | 0.8711 | 0.7514 |
| 0.3764 | 69.0 | 40710 | 0.9552 | 0.7557 |
| 0.3841 | 70.0 | 41300 | 0.9638 | 0.7523 |
| 0.3758 | 71.0 | 41890 | 0.8728 | 0.7453 |
| 0.376 | 72.0 | 42480 | 0.9654 | 0.7450 |
| 0.364 | 73.0 | 43070 | 1.0121 | 0.7477 |
| 0.3567 | 74.0 | 43660 | 1.0070 | 0.7508 |
| 0.3723 | 75.0 | 44250 | 0.9271 | 0.7508 |
| 0.3673 | 76.0 | 44840 | 0.8824 | 0.7450 |
| 0.3656 | 77.0 | 45430 | 0.8812 | 0.7477 |
| 0.3722 | 78.0 | 46020 | 0.8728 | 0.7502 |
| 0.3719 | 79.0 | 46610 | 0.8551 | 0.7465 |
| 0.3502 | 80.0 | 47200 | 0.8913 | 0.7523 |
| 0.3467 | 81.0 | 47790 | 0.8476 | 0.7489 |
| 0.348 | 82.0 | 48380 | 0.8885 | 0.7517 |
| 0.3498 | 83.0 | 48970 | 0.8690 | 0.7443 |
| 0.3457 | 84.0 | 49560 | 0.8824 | 0.7480 |
| 0.3463 | 85.0 | 50150 | 0.8450 | 0.7453 |
| 0.3465 | 86.0 | 50740 | 0.8760 | 0.7459 |
| 0.3418 | 87.0 | 51330 | 0.8702 | 0.7437 |
| 0.3394 | 88.0 | 51920 | 0.8782 | 0.7434 |
| 0.3371 | 89.0 | 52510 | 0.8950 | 0.7474 |
| 0.3309 | 90.0 | 53100 | 0.8568 | 0.7398 |
| 0.3321 | 91.0 | 53690 | 0.8973 | 0.7495 |
| 0.3385 | 92.0 | 54280 | 0.8401 | 0.7431 |
| 0.3264 | 93.0 | 54870 | 0.8658 | 0.7462 |
| 0.3382 | 94.0 | 55460 | 0.8652 | 0.7483 |
| 0.3279 | 95.0 | 56050 | 0.8785 | 0.7465 |
| 0.3274 | 96.0 | 56640 | 0.8666 | 0.7477 |
| 0.3272 | 97.0 | 57230 | 0.8666 | 0.7489 |
| 0.3147 | 98.0 | 57820 | 0.8641 | 0.7498 |
| 0.3172 | 99.0 | 58410 | 0.8616 | 0.7486 |
| 0.3256 | 100.0 | 59000 | 0.8603 | 0.7489 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
MochaPixel/MagicMix
|
MochaPixel
| 2023-09-08T02:37:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T09:51:55Z |
---
license: creativeml-openrail-m
---
|
fastbond/llama-2-7b-guanaco-viggo-long1
|
fastbond
| 2023-09-08T02:28:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-08T02:28:12Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
osieosie/bloom-mnli-4bit-1b7-bnb-seed65
|
osieosie
| 2023-09-08T02:27:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T06:29:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
pamixsun/swinv2_tiny_for_glaucoma_classification
|
pamixsun
| 2023-09-08T02:23:00Z | 101 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"swinv2",
"image-classification",
"vision",
"fundus",
"glaucoma",
"REFUGE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-14T01:30:31Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- fundus
- glaucoma
- REFUGE
widget:
- src: >-
https://huggingface.co/pamixsun/swinv2_tiny_for_glaucoma_classification/resolve/main/example.jpg
example_title: fundus image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model utilizes a Swin Transformer architecture and has undergone supervised fine-tuning on retinal fundus images from the [REFUGE challenge dataset](https://refuge.grand-challenge.org/).
It is specialized in automated analysis of retinal fundus photographs for glaucoma detection.
By extracting hierarchical visual features from input fundus images in a cross-scale manner, the model is able to effectively categorize each image as either glaucoma or non-glaucoma. Extensive experiments demonstrate that this model architecture achieves state-of-the-art performance on the REFUGE benchmark for fundus image-based glaucoma classification.
To obtain optimal predictions, it is recommended to provide this model with standardized retinal fundus photographs captured using typical fundus imaging protocols.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Xu Sun](https://pamixsun.github.io)
- **Shared by:** [Xu Sun](https://pamixsun.github.io)
- **Model type:** Image classification
- **License:** Apache-2.0
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The pretrained model provides glaucoma classification functionality solely based on analysis of retinal fundus images.
You may directly utilize the raw model without modification to categorize fundus images as either glaucoma or non-glaucoma.
This model is specialized in extracting discriminative features from fundus images to identify glaucoma manifestations.
However, to achieve optimal performance, it is highly recommended to fine-tune the model on a representative fundus image dataset prior to deployment in real-world applications.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is specialized in analyzing retinal fundus images, and is trained exclusively on fundus image datasets to classify images as glaucoma or non-glaucoma.
Therefore, to obtain accurate predictions, it is crucial to only input fundus images when using this model.
Feeding other types of images may lead to meaningless results.
In summary, this model expects fundus images as input for glaucoma classification.
For the best performance, please adhere strictly to this input specification.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import cv2
import torch
from transformers import AutoImageProcessor, Swinv2ForImageClassification
image = cv2.imread('./example.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
processor = AutoImageProcessor.from_pretrained("pamixsun/swinv2_tiny_for_glaucoma_classification")
model = Swinv2ForImageClassification.from_pretrained("pamixsun/swinv2_tiny_for_glaucoma_classification")
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts either glaucoma or non-glaucoma.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Model Card Contact
- pamixsun@gmail.com
|
pypy/blip2-opt-2.7b-pokemon
|
pypy
| 2023-09-08T02:22:03Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-08T02:21:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
wangrongsheng/Generate-News-Title-7b-chat
|
wangrongsheng
| 2023-09-08T01:57:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-08T01:56:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
wangrongsheng/Generate-News-Abstract-7b-chat
|
wangrongsheng
| 2023-09-08T01:50:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T11:08:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
Onutoa/1_7e-3_10_0.5
|
Onutoa
| 2023-09-08T01:42:08Z | 39 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T22:41:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_7e-3_10_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_7e-3_10_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9382
- Accuracy: 0.7557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.007
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7912 | 1.0 | 590 | 2.5545 | 0.3872 |
| 3.233 | 2.0 | 1180 | 2.8480 | 0.6217 |
| 2.7249 | 3.0 | 1770 | 2.7584 | 0.4037 |
| 2.5026 | 4.0 | 2360 | 1.8755 | 0.6113 |
| 2.235 | 5.0 | 2950 | 1.6668 | 0.6661 |
| 1.9303 | 6.0 | 3540 | 1.6441 | 0.6346 |
| 1.9491 | 7.0 | 4130 | 2.1352 | 0.5789 |
| 1.6294 | 8.0 | 4720 | 2.2811 | 0.6572 |
| 1.6591 | 9.0 | 5310 | 1.5834 | 0.6896 |
| 1.5251 | 10.0 | 5900 | 1.7600 | 0.6716 |
| 1.5112 | 11.0 | 6490 | 1.2400 | 0.6905 |
| 1.3972 | 12.0 | 7080 | 1.2023 | 0.7165 |
| 1.3804 | 13.0 | 7670 | 1.1972 | 0.7009 |
| 1.3085 | 14.0 | 8260 | 1.6154 | 0.7101 |
| 1.2559 | 15.0 | 8850 | 1.1741 | 0.7 |
| 1.2292 | 16.0 | 9440 | 1.1551 | 0.7028 |
| 1.1711 | 17.0 | 10030 | 1.9400 | 0.6242 |
| 1.1356 | 18.0 | 10620 | 1.1234 | 0.7165 |
| 1.0466 | 19.0 | 11210 | 1.0939 | 0.7312 |
| 1.1043 | 20.0 | 11800 | 1.2564 | 0.7183 |
| 0.9875 | 21.0 | 12390 | 1.1273 | 0.7135 |
| 0.9788 | 22.0 | 12980 | 1.0513 | 0.7187 |
| 0.9086 | 23.0 | 13570 | 1.0497 | 0.7312 |
| 0.9327 | 24.0 | 14160 | 1.1127 | 0.7046 |
| 0.8835 | 25.0 | 14750 | 1.3732 | 0.7235 |
| 0.8652 | 26.0 | 15340 | 1.6447 | 0.6511 |
| 0.843 | 27.0 | 15930 | 1.1686 | 0.7425 |
| 0.8072 | 28.0 | 16520 | 1.0110 | 0.7446 |
| 0.7735 | 29.0 | 17110 | 1.1610 | 0.7401 |
| 0.7717 | 30.0 | 17700 | 0.9851 | 0.7352 |
| 0.7746 | 31.0 | 18290 | 1.4960 | 0.7223 |
| 0.7439 | 32.0 | 18880 | 0.9772 | 0.7358 |
| 0.7534 | 33.0 | 19470 | 1.0034 | 0.7456 |
| 0.6874 | 34.0 | 20060 | 0.9894 | 0.7407 |
| 0.6877 | 35.0 | 20650 | 1.4460 | 0.6771 |
| 0.6816 | 36.0 | 21240 | 1.0221 | 0.7489 |
| 0.7158 | 37.0 | 21830 | 1.3579 | 0.7425 |
| 0.6694 | 38.0 | 22420 | 1.1472 | 0.7517 |
| 0.6586 | 39.0 | 23010 | 1.0499 | 0.7523 |
| 0.6418 | 40.0 | 23600 | 1.0344 | 0.7459 |
| 0.6366 | 41.0 | 24190 | 1.2582 | 0.7422 |
| 0.6289 | 42.0 | 24780 | 0.9833 | 0.7370 |
| 0.6065 | 43.0 | 25370 | 1.0209 | 0.7529 |
| 0.6053 | 44.0 | 25960 | 1.0147 | 0.7287 |
| 0.5958 | 45.0 | 26550 | 0.9454 | 0.7456 |
| 0.5637 | 46.0 | 27140 | 0.9789 | 0.7535 |
| 0.5818 | 47.0 | 27730 | 1.0014 | 0.7529 |
| 0.5743 | 48.0 | 28320 | 0.9380 | 0.7526 |
| 0.592 | 49.0 | 28910 | 0.9494 | 0.7385 |
| 0.5591 | 50.0 | 29500 | 0.9728 | 0.7523 |
| 0.5431 | 51.0 | 30090 | 0.9528 | 0.7502 |
| 0.5537 | 52.0 | 30680 | 0.9995 | 0.7410 |
| 0.5444 | 53.0 | 31270 | 0.9815 | 0.7538 |
| 0.5372 | 54.0 | 31860 | 0.9556 | 0.7517 |
| 0.5491 | 55.0 | 32450 | 0.9824 | 0.7459 |
| 0.5294 | 56.0 | 33040 | 0.9625 | 0.7391 |
| 0.5074 | 57.0 | 33630 | 0.9761 | 0.7538 |
| 0.5127 | 58.0 | 34220 | 1.1065 | 0.7587 |
| 0.5095 | 59.0 | 34810 | 0.9373 | 0.7434 |
| 0.5079 | 60.0 | 35400 | 0.9822 | 0.7532 |
| 0.4886 | 61.0 | 35990 | 1.0654 | 0.7627 |
| 0.5143 | 62.0 | 36580 | 0.9688 | 0.7520 |
| 0.4822 | 63.0 | 37170 | 0.9816 | 0.7373 |
| 0.4956 | 64.0 | 37760 | 0.9746 | 0.7477 |
| 0.4953 | 65.0 | 38350 | 0.9493 | 0.7544 |
| 0.4794 | 66.0 | 38940 | 1.0795 | 0.7532 |
| 0.4794 | 67.0 | 39530 | 0.9915 | 0.7575 |
| 0.48 | 68.0 | 40120 | 0.9385 | 0.7498 |
| 0.4633 | 69.0 | 40710 | 1.0949 | 0.7526 |
| 0.4749 | 70.0 | 41300 | 1.0207 | 0.7557 |
| 0.4657 | 71.0 | 41890 | 0.9383 | 0.7428 |
| 0.465 | 72.0 | 42480 | 1.0948 | 0.7581 |
| 0.4558 | 73.0 | 43070 | 0.9506 | 0.7492 |
| 0.4516 | 74.0 | 43660 | 1.0518 | 0.7606 |
| 0.4577 | 75.0 | 44250 | 1.0124 | 0.7575 |
| 0.4642 | 76.0 | 44840 | 0.9293 | 0.7526 |
| 0.4497 | 77.0 | 45430 | 0.9862 | 0.7541 |
| 0.4614 | 78.0 | 46020 | 0.9403 | 0.7566 |
| 0.4442 | 79.0 | 46610 | 0.9599 | 0.7581 |
| 0.4483 | 80.0 | 47200 | 0.9766 | 0.7593 |
| 0.4223 | 81.0 | 47790 | 0.9297 | 0.7547 |
| 0.4416 | 82.0 | 48380 | 0.9614 | 0.7587 |
| 0.4279 | 83.0 | 48970 | 0.9403 | 0.7587 |
| 0.4159 | 84.0 | 49560 | 1.0827 | 0.7569 |
| 0.4319 | 85.0 | 50150 | 0.9250 | 0.7505 |
| 0.427 | 86.0 | 50740 | 0.9475 | 0.7517 |
| 0.427 | 87.0 | 51330 | 0.9429 | 0.7523 |
| 0.4233 | 88.0 | 51920 | 0.9721 | 0.7581 |
| 0.4167 | 89.0 | 52510 | 0.9387 | 0.7557 |
| 0.4162 | 90.0 | 53100 | 0.9282 | 0.7544 |
| 0.4163 | 91.0 | 53690 | 0.9785 | 0.7566 |
| 0.4214 | 92.0 | 54280 | 0.9217 | 0.7517 |
| 0.4038 | 93.0 | 54870 | 0.9470 | 0.7584 |
| 0.4258 | 94.0 | 55460 | 0.9254 | 0.7550 |
| 0.4206 | 95.0 | 56050 | 0.9380 | 0.7569 |
| 0.4086 | 96.0 | 56640 | 0.9379 | 0.7578 |
| 0.3973 | 97.0 | 57230 | 0.9425 | 0.7557 |
| 0.3971 | 98.0 | 57820 | 0.9461 | 0.7572 |
| 0.3899 | 99.0 | 58410 | 0.9388 | 0.7557 |
| 0.4033 | 100.0 | 59000 | 0.9382 | 0.7557 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Onutoa/1_5e-3_10_0.5
|
Onutoa
| 2023-09-08T01:33:04Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T22:33:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_5e-3_10_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_5e-3_10_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9119
- Accuracy: 0.7446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.6814 | 1.0 | 590 | 2.2524 | 0.6128 |
| 2.6474 | 2.0 | 1180 | 2.2889 | 0.6217 |
| 2.7373 | 3.0 | 1770 | 3.8911 | 0.4401 |
| 2.7048 | 4.0 | 2360 | 2.6859 | 0.6214 |
| 2.3193 | 5.0 | 2950 | 3.0408 | 0.6217 |
| 2.0191 | 6.0 | 3540 | 2.0926 | 0.5706 |
| 1.9595 | 7.0 | 4130 | 1.7082 | 0.6908 |
| 1.833 | 8.0 | 4720 | 1.7816 | 0.6092 |
| 1.7395 | 9.0 | 5310 | 1.6251 | 0.6281 |
| 1.7038 | 10.0 | 5900 | 2.6889 | 0.6554 |
| 1.7975 | 11.0 | 6490 | 1.5326 | 0.6994 |
| 1.5534 | 12.0 | 7080 | 2.6513 | 0.5554 |
| 1.5833 | 13.0 | 7670 | 1.5617 | 0.6410 |
| 1.4585 | 14.0 | 8260 | 1.8289 | 0.6171 |
| 1.4375 | 15.0 | 8850 | 1.6306 | 0.6517 |
| 1.3418 | 16.0 | 9440 | 1.2628 | 0.7153 |
| 1.2576 | 17.0 | 10030 | 1.4116 | 0.7098 |
| 1.2068 | 18.0 | 10620 | 1.1643 | 0.7089 |
| 1.1781 | 19.0 | 11210 | 1.4702 | 0.7083 |
| 1.1497 | 20.0 | 11800 | 1.1550 | 0.6988 |
| 1.0552 | 21.0 | 12390 | 1.0861 | 0.7284 |
| 1.047 | 22.0 | 12980 | 1.0821 | 0.7205 |
| 1.0036 | 23.0 | 13570 | 1.1193 | 0.7193 |
| 0.9589 | 24.0 | 14160 | 1.3591 | 0.7135 |
| 0.9604 | 25.0 | 14750 | 1.0030 | 0.7229 |
| 0.9283 | 26.0 | 15340 | 1.1469 | 0.7031 |
| 0.9242 | 27.0 | 15930 | 1.0466 | 0.7318 |
| 0.8703 | 28.0 | 16520 | 1.0736 | 0.7343 |
| 0.858 | 29.0 | 17110 | 1.0357 | 0.7183 |
| 0.8267 | 30.0 | 17700 | 0.9936 | 0.7339 |
| 0.8148 | 31.0 | 18290 | 0.9989 | 0.7321 |
| 0.7981 | 32.0 | 18880 | 1.0559 | 0.7404 |
| 0.7956 | 33.0 | 19470 | 1.0207 | 0.7217 |
| 0.7817 | 34.0 | 20060 | 0.9636 | 0.7361 |
| 0.7545 | 35.0 | 20650 | 0.9415 | 0.7324 |
| 0.7372 | 36.0 | 21240 | 1.0793 | 0.7413 |
| 0.7317 | 37.0 | 21830 | 1.2911 | 0.7315 |
| 0.7411 | 38.0 | 22420 | 0.9517 | 0.7364 |
| 0.7093 | 39.0 | 23010 | 1.0133 | 0.7382 |
| 0.6838 | 40.0 | 23600 | 1.1835 | 0.7401 |
| 0.6773 | 41.0 | 24190 | 0.9180 | 0.7379 |
| 0.6776 | 42.0 | 24780 | 0.9410 | 0.7367 |
| 0.6486 | 43.0 | 25370 | 0.9836 | 0.7419 |
| 0.6527 | 44.0 | 25960 | 0.9721 | 0.7309 |
| 0.6465 | 45.0 | 26550 | 0.9508 | 0.7388 |
| 0.6245 | 46.0 | 27140 | 0.9273 | 0.7434 |
| 0.6258 | 47.0 | 27730 | 0.9763 | 0.7330 |
| 0.6086 | 48.0 | 28320 | 0.9135 | 0.7388 |
| 0.6417 | 49.0 | 28910 | 1.0037 | 0.7446 |
| 0.6064 | 50.0 | 29500 | 0.9751 | 0.7398 |
| 0.5938 | 51.0 | 30090 | 0.9801 | 0.7453 |
| 0.5951 | 52.0 | 30680 | 0.9515 | 0.7370 |
| 0.5718 | 53.0 | 31270 | 0.9160 | 0.7419 |
| 0.5751 | 54.0 | 31860 | 0.9263 | 0.7462 |
| 0.5839 | 55.0 | 32450 | 0.9170 | 0.7376 |
| 0.5707 | 56.0 | 33040 | 0.9787 | 0.7431 |
| 0.564 | 57.0 | 33630 | 0.9822 | 0.7431 |
| 0.5539 | 58.0 | 34220 | 0.9335 | 0.7407 |
| 0.5567 | 59.0 | 34810 | 1.0004 | 0.7370 |
| 0.5555 | 60.0 | 35400 | 0.9554 | 0.7446 |
| 0.5344 | 61.0 | 35990 | 0.9199 | 0.7483 |
| 0.5494 | 62.0 | 36580 | 0.9970 | 0.7456 |
| 0.5226 | 63.0 | 37170 | 0.9454 | 0.7434 |
| 0.5275 | 64.0 | 37760 | 0.9771 | 0.7361 |
| 0.5186 | 65.0 | 38350 | 1.0032 | 0.7517 |
| 0.52 | 66.0 | 38940 | 0.9263 | 0.7440 |
| 0.5209 | 67.0 | 39530 | 1.0130 | 0.7443 |
| 0.528 | 68.0 | 40120 | 0.9466 | 0.7422 |
| 0.5146 | 69.0 | 40710 | 0.9790 | 0.7456 |
| 0.5026 | 70.0 | 41300 | 0.9880 | 0.7489 |
| 0.5204 | 71.0 | 41890 | 0.9132 | 0.7373 |
| 0.5049 | 72.0 | 42480 | 0.9589 | 0.7480 |
| 0.4969 | 73.0 | 43070 | 0.9564 | 0.7446 |
| 0.4911 | 74.0 | 43660 | 0.9255 | 0.7336 |
| 0.4961 | 75.0 | 44250 | 0.9983 | 0.7502 |
| 0.4986 | 76.0 | 44840 | 0.9003 | 0.7376 |
| 0.4979 | 77.0 | 45430 | 0.8937 | 0.7385 |
| 0.4941 | 78.0 | 46020 | 0.9082 | 0.7422 |
| 0.487 | 79.0 | 46610 | 0.9231 | 0.7471 |
| 0.4773 | 80.0 | 47200 | 0.9673 | 0.7437 |
| 0.4665 | 81.0 | 47790 | 0.9598 | 0.7462 |
| 0.4824 | 82.0 | 48380 | 0.9110 | 0.7410 |
| 0.4795 | 83.0 | 48970 | 0.9222 | 0.7425 |
| 0.4654 | 84.0 | 49560 | 0.9369 | 0.7459 |
| 0.4605 | 85.0 | 50150 | 0.9379 | 0.7502 |
| 0.477 | 86.0 | 50740 | 0.8911 | 0.7437 |
| 0.4644 | 87.0 | 51330 | 0.9287 | 0.7434 |
| 0.4539 | 88.0 | 51920 | 0.9421 | 0.7422 |
| 0.4582 | 89.0 | 52510 | 0.9248 | 0.7437 |
| 0.4488 | 90.0 | 53100 | 0.9152 | 0.7425 |
| 0.4554 | 91.0 | 53690 | 0.9511 | 0.7471 |
| 0.4547 | 92.0 | 54280 | 0.9064 | 0.7419 |
| 0.4534 | 93.0 | 54870 | 0.9404 | 0.7471 |
| 0.463 | 94.0 | 55460 | 0.9346 | 0.7453 |
| 0.4482 | 95.0 | 56050 | 0.9191 | 0.7437 |
| 0.4518 | 96.0 | 56640 | 0.9154 | 0.7431 |
| 0.4326 | 97.0 | 57230 | 0.9055 | 0.7440 |
| 0.4291 | 98.0 | 57820 | 0.9072 | 0.7437 |
| 0.4278 | 99.0 | 58410 | 0.9101 | 0.7437 |
| 0.4397 | 100.0 | 59000 | 0.9119 | 0.7446 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CyberHarem/caren_hortensia_fatekaleidlinerprismaillya
|
CyberHarem
| 2023-09-08T01:23:45Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/caren_hortensia_fatekaleidlinerprismaillya",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-08T01:15:34Z |
---
license: mit
datasets:
- CyberHarem/caren_hortensia_fatekaleidlinerprismaillya
pipeline_tag: text-to-image
tags:
- art
---
# Lora of caren_hortensia_fatekaleidlinerprismaillya
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4080, you need to download `4080/caren_hortensia_fatekaleidlinerprismaillya.pt` as the embedding and `4080/caren_hortensia_fatekaleidlinerprismaillya.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
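A minimal retrieval sketch with `huggingface_hub` (file paths follow the naming described above; how the embedding and LoRA are then wired into your generation stack depends on the tool you use):
```python
from huggingface_hub import hf_hub_download

repo = "CyberHarem/caren_hortensia_fatekaleidlinerprismaillya"
step = 4080  # the step recommended below

# Download the embedding (.pt) and the LoRA weights (.safetensors) for that step.
embedding_path = hf_hub_download(repo, f"{step}/caren_hortensia_fatekaleidlinerprismaillya.pt")
lora_path = hf_hub_download(repo, f"{step}/caren_hortensia_fatekaleidlinerprismaillya.safetensors")
print(embedding_path, lora_path)
```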
**The best step we recommend is 4080**, with a score of 0.929. The trigger words are:
1. `caren_hortensia_fatekaleidlinerprismaillya`
2. `long_hair, white_hair, yellow_eyes`
This model is not recommended for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.922 | [Download](5100/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.916 | [Download](4760/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.919 | [Download](4420/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| **4080** | **0.929** | [**Download**](4080/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.912 | [Download](3740/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.924 | [Download](3400/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.906 | [Download](3060/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.876 | [Download](2720/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.891 | [Download](2380/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.895 | [Download](2040/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.872 | [Download](1700/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.783 | [Download](1360/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.704 | [Download](1020/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.675 | [Download](680/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.507 | [Download](340/caren_hortensia_fatekaleidlinerprismaillya.zip) |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
johaanm/test-planner-alpha-V7.4
|
johaanm
| 2023-09-08T01:16:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-08T01:16:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
mespinosami/synthetic-cloud-removal-sd-1_5
|
mespinosami
| 2023-09-08T01:10:35Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-07T10:15:39Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-mespinosami/synthetic-cloud-removal-sd-1_5
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: cloudless satellite image, remove all clouds from satellite image, no clouds

prompt: cloudless satellite image, remove all clouds from satellite image, no clouds

prompt: cloudless satellite image, remove all clouds from satellite image, no clouds

|
shengqin/bloomz-xss-sqli-2
|
shengqin
| 2023-09-08T00:48:12Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T21:41:19Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
daochf/Lora-MetaLlama2-7b-chat-hf-PuceDs03-v01
|
daochf
| 2023-09-07T23:23:43Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T23:23:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
facebook/mms-cclms
|
facebook
| 2023-09-07T23:20:56Z | 0 | 0 | null |
[
"mms",
"arxiv:2207.04672",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-06-06T18:48:58Z |
---
license: cc-by-nc-4.0
tags:
- mms
---
# Massively Multilingual Speech (MMS) - Common Crawl Language Models
This repository consists of the n-gram language models trained on Common Crawl data ([Conneau et al. 2020b](https://aclanthology.org/2020.acl-main.747/), [NLLB_Team et al. 2022](https://arxiv.org/abs/2207.04672)) using [KenLM library](https://github.com/kpu/kenlm).
For the following languages, the LMs are not present in the repository (due to 50GB limit on HuggingFace) and can be downloaded using the link provided here.
Mandarin Chinese (Simplified) - [Download LM](https://dl.fbaipublicfiles.com/mms/lms/cmn-script_simplified/char_20gram.bin)
Japanese - [Download LM](https://dl.fbaipublicfiles.com/mms/lms/jpn/char_20gram.bin)
Thai - [Download LM](https://dl.fbaipublicfiles.com/mms/lms/tha/char_20gram.bin)
Cantonese(Traditional) - [Download LM](https://dl.fbaipublicfiles.com/mms/lms/yue-script_traditional/char_20gram.bin)
## Table Of Content
- [Example](#example)
- [Supported Languages](#supported-languages)
- [Model details](#model-details)
- [Additional links](#additional-links)
## Example
Checkout the code here - https://huggingface.co/spaces/mms-meta/MMS/blob/main/asr.py which uses LMs for decoding the output from ASR models.
## Supported Languages
We support language models in 102 languages. Unclick the following to toogle all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3).
You can find more details about the languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
<details>
<summary>Click to toggle</summary>
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- bul
- cat
- ceb
- ces
- ckb
- cmn
- cym
- dan
- deu
- ell
- eng
- est
- fas
- fin
- fra
- ful
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ibo
- ind
- isl
- ita
- jav
- jpn
- kam
- kan
- kat
- kaz
- kea
- khm
- kir
- kor
- lao
- lav
- lin
- lit
- ltz
- lug
- luo
- mal
- mar
- mkd
- mlt
- mon
- mri
- mya
- nld
- nob
- npi
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- por
- pus
- ron
- rus
- slk
- slv
- sna
- snd
- som
- spa
- srp
- swe
- swh
- tam
- tel
- tgk
- tgl
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zlm
- zul
</details>
## Model details
- **Developed by:** Vineel Pratap et al.
- **Model type:** Multi-Lingual Automatic Speech Recognition model
- **Language(s):** 126 languages, see [supported languages](#supported-languages)
- **License:** CC-BY-NC 4.0 license
- **Num parameters**: 1 billion
- **Audio sampling rate**: 16,000 kHz
- **Cite as:**
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
## Additional Links
- [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS base checkpoints:
- [facebook/mms-1b](https://huggingface.co/facebook/mms-1b)
- [facebook/mms-300m](https://huggingface.co/facebook/mms-300m)
- [Official Space](https://huggingface.co/spaces/facebook/MMS)
|
JohnyQuest/endo_llama2
|
JohnyQuest
| 2023-09-07T23:13:13Z | 0 | 0 | null |
[
"license:llama2",
"region:us"
] | null | 2023-09-07T23:12:21Z |
---
license: llama2
---
Endocrinology question answer finetuned LLAMA2.
Version 1 - tuned only on hypothyroidism
|
federicochiarello/ppo-LunarLander-v2
|
federicochiarello
| 2023-09-07T23:12:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T23:12:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.84 +/- 21.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sarwarbeing/wm-04-aws-contrastive-learning
|
sarwarbeing
| 2023-09-07T23:03:12Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-07T20:21:44Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# sarwarbeing/wm-04-aws-contrastive-learning
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sarwarbeing/wm-04-aws-contrastive-learning")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
OmkarB/SQL-GQL-Finetuned-Instruct-Tune
|
OmkarB
| 2023-09-07T23:00:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T21:26:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
badhorse666/lunar-lander
|
badhorse666
| 2023-09-07T22:54:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T22:52:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.58 +/- 19.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Echiguerkh/rinna-AraBert-qa-ar3
|
Echiguerkh
| 2023-09-07T22:46:59Z | 72 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:arcd",
"base_model:aubmindlab/bert-base-arabertv2",
"base_model:finetune:aubmindlab/bert-base-arabertv2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-07T22:11:43Z |
---
base_model: aubmindlab/bert-base-arabertv2
tags:
- generated_from_trainer
datasets:
- arcd
model-index:
- name: rinna-AraBert-qa-ar3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rinna-AraBert-qa-ar3
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on the arcd dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0074 | 6.88 | 150 | 3.1727 |
| 1.3286 | 13.75 | 300 | 3.2007 |
| 0.7605 | 20.63 | 450 | 3.5414 |
| 0.5722 | 27.51 | 600 | 3.7678 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Onutoa/1_5e-3_5_0.5
|
Onutoa
| 2023-09-07T22:33:20Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T19:34:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_5e-3_5_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_5e-3_5_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9516
- Accuracy: 0.7450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.4372 | 1.0 | 590 | 1.8593 | 0.6177 |
| 2.3953 | 2.0 | 1180 | 3.6910 | 0.3786 |
| 2.3694 | 3.0 | 1770 | 2.1033 | 0.4694 |
| 2.0494 | 4.0 | 2360 | 1.7694 | 0.6006 |
| 2.034 | 5.0 | 2950 | 1.7949 | 0.6355 |
| 1.8146 | 6.0 | 3540 | 1.7374 | 0.6159 |
| 1.896 | 7.0 | 4130 | 1.8850 | 0.5624 |
| 1.7794 | 8.0 | 4720 | 2.8405 | 0.6245 |
| 1.8298 | 9.0 | 5310 | 2.6985 | 0.4349 |
| 1.7892 | 10.0 | 5900 | 2.2049 | 0.6352 |
| 1.6916 | 11.0 | 6490 | 1.6606 | 0.6272 |
| 1.6384 | 12.0 | 7080 | 1.5955 | 0.6394 |
| 1.6382 | 13.0 | 7670 | 1.6722 | 0.6596 |
| 1.6078 | 14.0 | 8260 | 1.4874 | 0.6587 |
| 1.5373 | 15.0 | 8850 | 1.4382 | 0.6642 |
| 1.4655 | 16.0 | 9440 | 1.4120 | 0.6700 |
| 1.4354 | 17.0 | 10030 | 2.0067 | 0.6532 |
| 1.4021 | 18.0 | 10620 | 1.7860 | 0.5875 |
| 1.3537 | 19.0 | 11210 | 1.4043 | 0.6853 |
| 1.3638 | 20.0 | 11800 | 1.3726 | 0.6875 |
| 1.3061 | 21.0 | 12390 | 1.3332 | 0.6740 |
| 1.3052 | 22.0 | 12980 | 1.2831 | 0.6939 |
| 1.4056 | 23.0 | 13570 | 1.4235 | 0.6835 |
| 1.3389 | 24.0 | 14160 | 1.5395 | 0.6817 |
| 1.2294 | 25.0 | 14750 | 1.2364 | 0.6994 |
| 1.2213 | 26.0 | 15340 | 1.1806 | 0.7012 |
| 1.203 | 27.0 | 15930 | 1.3771 | 0.6538 |
| 1.1667 | 28.0 | 16520 | 1.3193 | 0.6820 |
| 1.1516 | 29.0 | 17110 | 1.3490 | 0.6621 |
| 1.1657 | 30.0 | 17700 | 1.1866 | 0.7015 |
| 1.1212 | 31.0 | 18290 | 1.2403 | 0.6991 |
| 1.0632 | 32.0 | 18880 | 1.1608 | 0.7138 |
| 1.0702 | 33.0 | 19470 | 1.3606 | 0.6642 |
| 1.0609 | 34.0 | 20060 | 1.1448 | 0.6972 |
| 1.0407 | 35.0 | 20650 | 1.2761 | 0.6838 |
| 1.0151 | 36.0 | 21240 | 2.0245 | 0.6862 |
| 1.0246 | 37.0 | 21830 | 1.0999 | 0.7012 |
| 0.9971 | 38.0 | 22420 | 1.1661 | 0.6997 |
| 0.9732 | 39.0 | 23010 | 1.1978 | 0.7187 |
| 0.9642 | 40.0 | 23600 | 1.0760 | 0.7245 |
| 0.9628 | 41.0 | 24190 | 1.2119 | 0.7223 |
| 0.9605 | 42.0 | 24780 | 1.0589 | 0.7245 |
| 0.9297 | 43.0 | 25370 | 1.0496 | 0.7297 |
| 0.9282 | 44.0 | 25960 | 1.0384 | 0.7324 |
| 0.8927 | 45.0 | 26550 | 1.0954 | 0.7284 |
| 0.8753 | 46.0 | 27140 | 1.0344 | 0.7343 |
| 0.8787 | 47.0 | 27730 | 1.0238 | 0.7162 |
| 0.8397 | 48.0 | 28320 | 1.0650 | 0.7162 |
| 0.9109 | 49.0 | 28910 | 1.0901 | 0.7297 |
| 0.8609 | 50.0 | 29500 | 1.0152 | 0.7300 |
| 0.823 | 51.0 | 30090 | 1.1109 | 0.7128 |
| 0.8029 | 52.0 | 30680 | 1.0899 | 0.7113 |
| 0.8142 | 53.0 | 31270 | 1.0185 | 0.7339 |
| 0.7967 | 54.0 | 31860 | 0.9917 | 0.7336 |
| 0.7919 | 55.0 | 32450 | 1.0096 | 0.7352 |
| 0.7883 | 56.0 | 33040 | 1.0033 | 0.7355 |
| 0.7794 | 57.0 | 33630 | 1.0478 | 0.7336 |
| 0.7444 | 58.0 | 34220 | 1.0485 | 0.7284 |
| 0.7646 | 59.0 | 34810 | 1.0046 | 0.7242 |
| 0.7493 | 60.0 | 35400 | 0.9997 | 0.7300 |
| 0.7126 | 61.0 | 35990 | 0.9838 | 0.7398 |
| 0.7303 | 62.0 | 36580 | 0.9983 | 0.7300 |
| 0.7184 | 63.0 | 37170 | 1.1151 | 0.7156 |
| 0.711 | 64.0 | 37760 | 1.0758 | 0.7220 |
| 0.6963 | 65.0 | 38350 | 0.9884 | 0.7281 |
| 0.6972 | 66.0 | 38940 | 0.9688 | 0.7336 |
| 0.6927 | 67.0 | 39530 | 0.9794 | 0.7339 |
| 0.6923 | 68.0 | 40120 | 0.9681 | 0.7379 |
| 0.6829 | 69.0 | 40710 | 1.0167 | 0.7440 |
| 0.6705 | 70.0 | 41300 | 0.9709 | 0.7358 |
| 0.6717 | 71.0 | 41890 | 1.0276 | 0.7226 |
| 0.6683 | 72.0 | 42480 | 0.9858 | 0.7324 |
| 0.6405 | 73.0 | 43070 | 0.9954 | 0.7336 |
| 0.6423 | 74.0 | 43660 | 0.9730 | 0.7339 |
| 0.6628 | 75.0 | 44250 | 1.0100 | 0.7388 |
| 0.6528 | 76.0 | 44840 | 0.9663 | 0.7398 |
| 0.6327 | 77.0 | 45430 | 0.9619 | 0.7358 |
| 0.6434 | 78.0 | 46020 | 0.9671 | 0.7361 |
| 0.6261 | 79.0 | 46610 | 0.9778 | 0.7248 |
| 0.6312 | 80.0 | 47200 | 0.9802 | 0.7343 |
| 0.6098 | 81.0 | 47790 | 0.9736 | 0.7431 |
| 0.6221 | 82.0 | 48380 | 0.9820 | 0.7330 |
| 0.6166 | 83.0 | 48970 | 0.9587 | 0.7431 |
| 0.6072 | 84.0 | 49560 | 0.9671 | 0.7370 |
| 0.5986 | 85.0 | 50150 | 0.9629 | 0.7385 |
| 0.5959 | 86.0 | 50740 | 0.9576 | 0.7407 |
| 0.5858 | 87.0 | 51330 | 0.9793 | 0.7428 |
| 0.5846 | 88.0 | 51920 | 0.9722 | 0.7404 |
| 0.5879 | 89.0 | 52510 | 0.9822 | 0.7394 |
| 0.582 | 90.0 | 53100 | 0.9625 | 0.7422 |
| 0.5805 | 91.0 | 53690 | 0.9856 | 0.7443 |
| 0.5767 | 92.0 | 54280 | 0.9560 | 0.7404 |
| 0.5711 | 93.0 | 54870 | 0.9629 | 0.7440 |
| 0.5769 | 94.0 | 55460 | 0.9560 | 0.7431 |
| 0.557 | 95.0 | 56050 | 0.9562 | 0.7434 |
| 0.5706 | 96.0 | 56640 | 0.9565 | 0.7440 |
| 0.5691 | 97.0 | 57230 | 0.9515 | 0.7425 |
| 0.5496 | 98.0 | 57820 | 0.9570 | 0.7410 |
| 0.5643 | 99.0 | 58410 | 0.9512 | 0.7434 |
| 0.5539 | 100.0 | 59000 | 0.9516 | 0.7450 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CyberHarem/gakumazawa_tatsuko_fatekaleidlinerprismaillya
|
CyberHarem
| 2023-09-07T22:32:40Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/gakumazawa_tatsuko_fatekaleidlinerprismaillya",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T22:23:39Z |
---
license: mit
datasets:
- CyberHarem/gakumazawa_tatsuko_fatekaleidlinerprismaillya
pipeline_tag: text-to-image
tags:
- art
---
# Lora of gakumazawa_tatsuko_fatekaleidlinerprismaillya
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3740, you need to download `3740/gakumazawa_tatsuko_fatekaleidlinerprismaillya.pt` as the embedding and `3740/gakumazawa_tatsuko_fatekaleidlinerprismaillya.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3740**, with a score of 0.577. The trigger words are:
1. `gakumazawa_tatsuko_fatekaleidlinerprismaillya`
2. `hair_bun, blonde_hair, double_bun, ahoge, short_hair, open_mouth, brown_eyes`
This model is not recommended for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.560 | [Download](5100/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.535 | [Download](4760/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.550 | [Download](4420/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.572 | [Download](4080/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| **3740** | **0.577** | [**Download**](3740/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.549 | [Download](3400/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.555 | [Download](3060/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.550 | [Download](2720/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.441 | [Download](2380/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.476 | [Download](2040/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.480 | [Download](1700/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.488 | [Download](1360/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.314 | [Download](1020/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.323 | [Download](680/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.211 | [Download](340/gakumazawa_tatsuko_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
MajorBehrad/pixelcopter
|
MajorBehrad
| 2023-09-07T22:30:35Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T22:30:32Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 14.00 +/- 19.38
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CTFanon/ctf_lora_v9
|
CTFanon
| 2023-09-07T22:30:03Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-08-25T22:23:12Z |
# CTF LoRA
This LoRA lets you generate cock transformation images.

The metadata in the above picture contains an example prompt.
# About
This LoRA was trained on a handful of actual images, and several images generated from previous iterations of the model. It has a monochrome bias, and some poses are overfitted.
The LoRA is well suited for inpainting.
You should bring your own style LoRA, as the inherent style in the LoRA is rough.
# More examples
All of these images are direct output from the LoRA using FutaFactor as the base model. Decent results will require inpainting and patience.




|
Chris808/bloom_prompt_tuning_1694123891.9409811
|
Chris808
| 2023-09-07T22:15:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T22:15:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
dmitvuk/SEMEVAL23_TASK3_SUBTASK1_MULTI
|
dmitvuk
| 2023-09-07T21:41:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-25T12:07:13Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: SEMEVAL23_TASK3_SUBTASK1_MULTI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEMEVAL23_TASK3_SUBTASK1_MULTI
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6313
- F1: 0.6299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9669 | 1.0 | 160 | 0.9574 | 0.4075 |
| 0.4214 | 2.0 | 320 | 0.6809 | 0.5769 |
| 0.0096 | 3.0 | 480 | 1.3114 | 0.4152 |
| 0.2681 | 4.0 | 640 | 0.7792 | 0.6122 |
| 0.0007 | 5.0 | 800 | 1.3213 | 0.5765 |
| 0.0005 | 6.0 | 960 | 1.7983 | 0.5749 |
| 0.0011 | 7.0 | 1120 | 2.2000 | 0.5298 |
| 0.0008 | 8.0 | 1280 | 1.3757 | 0.5812 |
| 0.0007 | 9.0 | 1440 | 1.5493 | 0.5990 |
| 0.001 | 10.0 | 1600 | 1.4796 | 0.6233 |
| 0.0008 | 11.0 | 1760 | 1.4954 | 0.6251 |
| 0.0002 | 12.0 | 1920 | 1.6313 | 0.6299 |
| 0.0004 | 13.0 | 2080 | 1.5037 | 0.6296 |
| 0.0008 | 14.0 | 2240 | 1.5526 | 0.6277 |
| 0.0001 | 15.0 | 2400 | 1.5745 | 0.6254 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
thisisgamal/test101c
|
thisisgamal
| 2023-09-07T21:39:25Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-09-07T21:39:25Z |
---
license: bigscience-openrail-m
---
|
Dyang0/ppo-LunarLander-v2
|
Dyang0
| 2023-09-07T21:34:11Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T21:33:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.98 +/- 13.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file listing):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust it to the actual file in the repo.
checkpoint = load_from_hub("Dyang0/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GesusFranca/qa_model
|
GesusFranca
| 2023-09-07T21:34:06Z | 42 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt",
"base_model:finetune:mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-07T18:01:36Z |
---
license: apache-2.0
base_model: mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt
tags:
- generated_from_keras_callback
model-index:
- name: GesusFranca/qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GesusFranca/qa_model
This model is a fine-tuned version of [mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt](https://huggingface.co/mrm8488/bert-base-portuguese-cased-finetuned-squad-v1-pt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5147
- Validation Loss: 4.5895
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.0442 | 4.7063 | 0 |
| 4.5147 | 4.5895 | 1 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DrishtiSharma/roberta-large-hate-offensive-normal-speech-lr-2e-05
|
DrishtiSharma
| 2023-09-07T21:16:06Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T21:10:06Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-hate-offensive-normal-speech-lr-2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-hate-offensive-normal-speech-lr-2e-05
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0293
- Accuracy: 0.9837
- Weighted f1: 0.9837
- Weighted recall: 0.9837
- Weighted precision: 0.9839
- Micro f1: 0.9837
- Micro recall: 0.9837
- Micro precision: 0.9837
- Macro f1: 0.9832
- Macro recall: 0.9821
- Macro precision: 0.9845
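A minimal inference sketch; the mapping from class indices to the hate/offensive/normal labels is not stated in this card, so the printed labels may be the generic `LABEL_0`/`LABEL_1`/`LABEL_2`:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DrishtiSharma/roberta-large-hate-offensive-normal-speech-lr-2e-05",
)
print(classifier("Have a wonderful day, everyone!"))
```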
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Weighted recall | Weighted precision | Micro f1 | Micro recall | Micro precision | Macro f1 | Macro recall | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------------:|:--------:|:------------:|:---------------:|:--------:|:------------:|:---------------:|
| 0.5253 | 1.0 | 153 | 0.1270 | 0.9642 | 0.9647 | 0.9642 | 0.9681 | 0.9642 | 0.9642 | 0.9642 | 0.9633 | 0.9662 | 0.9633 |
| 0.0921 | 2.0 | 306 | 0.0878 | 0.9805 | 0.9805 | 0.9805 | 0.9807 | 0.9805 | 0.9805 | 0.9805 | 0.9803 | 0.9791 | 0.9818 |
| 0.0413 | 3.0 | 459 | 0.0590 | 0.9870 | 0.9870 | 0.9870 | 0.9875 | 0.9870 | 0.9870 | 0.9870 | 0.9860 | 0.9869 | 0.9857 |
| 0.0261 | 4.0 | 612 | 0.0523 | 0.9902 | 0.9902 | 0.9902 | 0.9904 | 0.9902 | 0.9902 | 0.9902 | 0.9896 | 0.9896 | 0.9900 |
| 0.012 | 5.0 | 765 | 0.0293 | 0.9837 | 0.9837 | 0.9837 | 0.9839 | 0.9837 | 0.9837 | 0.9837 | 0.9832 | 0.9821 | 0.9845 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.6.dev0
- Tokenizers 0.13.3
|
CyberHarem/bazett_fraga_mcremitz_fatekaleidlinerprismaillya
|
CyberHarem
| 2023-09-07T21:13:20Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/bazett_fraga_mcremitz_fatekaleidlinerprismaillya",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T21:01:09Z |
---
license: mit
datasets:
- CyberHarem/bazett_fraga_mcremitz_fatekaleidlinerprismaillya
pipeline_tag: text-to-image
tags:
- art
---
# Lora of bazett_fraga_mcremitz_fatekaleidlinerprismaillya
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7200, you need to download `7200/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.pt` as the embedding and `7200/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7200**, with a score of 0.957. The trigger words are:
1. `bazett_fraga_mcremitz_fatekaleidlinerprismaillya`
2. `short_hair, purple_hair, purple_eyes, mole, mole_under_eye, formal, suit, red_hair, necktie`
This model is not recommended for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **7200** | **0.957** | [**Download**](7200/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.957 | [Download](6720/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.899 | [Download](6240/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.875 | [Download](5760/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.945 | [Download](5280/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.906 | [Download](4800/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.933 | [Download](4320/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.930 | [Download](3840/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.951 | [Download](3360/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.876 | [Download](2880/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.907 | [Download](2400/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.859 | [Download](1920/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.762 | [Download](1440/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.800 | [Download](960/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.347 | [Download](480/bazett_fraga_mcremitz_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
Sanjay1234/Classification-using-SetFit-head
|
Sanjay1234
| 2023-09-07T21:04:47Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-07T21:02:26Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Sanjay1234/Classification-using-SetFit-head
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Sanjay1234/Classification-using-SetFit-head")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gauthamk28/a2c-PandaReachDense-v2
|
gauthamk28
| 2023-09-07T20:52:26Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T10:02:51Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.07 +/- 0.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename assumed) and load the trained A2C policy
checkpoint = load_from_hub("gauthamk28/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
DrishtiSharma/hateBERT-hate-offensive-normal-speech-lr-2e-05
|
DrishtiSharma
| 2023-09-07T20:45:58Z | 63 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:GroNLP/hateBERT",
"base_model:finetune:GroNLP/hateBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T20:44:17Z |
---
base_model: GroNLP/hateBERT
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hateBERT-hate-offensive-normal-speech-lr-2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hateBERT-hate-offensive-normal-speech-lr-2e-05
This model is a fine-tuned version of [GroNLP/hateBERT](https://huggingface.co/GroNLP/hateBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0207
- Accuracy: 0.9902
- Weighted f1: 0.9902
- Weighted recall: 0.9902
- Weighted precision: 0.9904
- Micro f1: 0.9902
- Micro recall: 0.9902
- Micro precision: 0.9902
- Macro f1: 0.9901
- Macro recall: 0.9903
- Macro precision: 0.9899
## Model description
More information needed
## Intended uses & limitations
More information needed
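As a starting point, here is a minimal inference sketch; the mapping from label ids to the hate/offensive/normal classes is not documented in this card and should be checked against the model config:
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint as a text-classification pipeline.
# The id-to-label mapping (hate / offensive / normal) is an assumption and should be
# verified against the model's config.json.
classifier = pipeline("text-classification", model="DrishtiSharma/hateBERT-hate-offensive-normal-speech-lr-2e-05")
print(classifier("Have a wonderful day, everyone!"))
```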
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Weighted recall | Weighted precision | Micro f1 | Micro recall | Micro precision | Macro f1 | Macro recall | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------------:|:--------:|:------------:|:---------------:|:--------:|:------------:|:---------------:|
| 0.6155 | 1.0 | 153 | 0.0889 | 0.9805 | 0.9805 | 0.9805 | 0.9806 | 0.9805 | 0.9805 | 0.9805 | 0.9801 | 0.9811 | 0.9793 |
| 0.0665 | 2.0 | 306 | 0.0368 | 0.9870 | 0.9870 | 0.9870 | 0.9870 | 0.9870 | 0.9870 | 0.9870 | 0.9864 | 0.9866 | 0.9864 |
| 0.0235 | 3.0 | 459 | 0.0264 | 0.9902 | 0.9902 | 0.9902 | 0.9904 | 0.9902 | 0.9902 | 0.9902 | 0.9901 | 0.9903 | 0.9899 |
| 0.0182 | 4.0 | 612 | 0.0414 | 0.9870 | 0.9870 | 0.9870 | 0.9873 | 0.9870 | 0.9870 | 0.9870 | 0.9865 | 0.9869 | 0.9864 |
| 0.012 | 5.0 | 765 | 0.0207 | 0.9902 | 0.9902 | 0.9902 | 0.9904 | 0.9902 | 0.9902 | 0.9902 | 0.9901 | 0.9903 | 0.9899 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.6.dev0
- Tokenizers 0.13.3
|
vaikunthgc/trial
|
vaikunthgc
| 2023-09-07T20:42:38Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"gpt2",
"en",
"region:us"
] | null | 2023-09-07T20:36:33Z |
---
language:
- en
library_name: adapter-transformers
---
|
DrishtiSharma/fBERT-hate-offensive-normal-speech-lr-2e-05
|
DrishtiSharma
| 2023-09-07T20:40:35Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:diptanu/fBERT",
"base_model:finetune:diptanu/fBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T20:38:53Z |
---
base_model: diptanu/fBERT
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fBERT-hate-offensive-normal-speech-lr-2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fBERT-hate-offensive-normal-speech-lr-2e-05
This model is a fine-tuned version of [diptanu/fBERT](https://huggingface.co/diptanu/fBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0152
- Accuracy: 0.9935
- Weighted f1: 0.9935
- Weighted recall: 0.9935
- Weighted precision: 0.9936
- Micro f1: 0.9935
- Micro recall: 0.9935
- Micro precision: 0.9935
- Macro f1: 0.9932
- Macro recall: 0.9938
- Macro precision: 0.9927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Weighted recall | Weighted precision | Micro f1 | Micro recall | Micro precision | Macro f1 | Macro recall | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------------:|:--------:|:------------:|:---------------:|:--------:|:------------:|:---------------:|
| 0.4897 | 1.0 | 153 | 0.0784 | 0.9739 | 0.9741 | 0.9739 | 0.9755 | 0.9739 | 0.9739 | 0.9739 | 0.9730 | 0.9744 | 0.9729 |
| 0.0723 | 2.0 | 306 | 0.0183 | 0.9967 | 0.9967 | 0.9967 | 0.9968 | 0.9967 | 0.9967 | 0.9967 | 0.9964 | 0.9965 | 0.9963 |
| 0.027 | 3.0 | 459 | 0.0226 | 0.9935 | 0.9935 | 0.9935 | 0.9936 | 0.9935 | 0.9935 | 0.9935 | 0.9932 | 0.9938 | 0.9927 |
| 0.0139 | 4.0 | 612 | 0.0194 | 0.9902 | 0.9903 | 0.9902 | 0.9905 | 0.9902 | 0.9902 | 0.9902 | 0.9896 | 0.9903 | 0.9891 |
| 0.0119 | 5.0 | 765 | 0.0152 | 0.9935 | 0.9935 | 0.9935 | 0.9936 | 0.9935 | 0.9935 | 0.9935 | 0.9932 | 0.9938 | 0.9927 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.6.dev0
- Tokenizers 0.13.3
|
estonto/fido-gpt
|
estonto
| 2023-09-07T20:35:27Z | 63 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-28T08:24:53Z |
---
language:
- en
---
# FIDO-GPT: Generative AI behind "Fidonet Cybernetic Immortality" Project
[FIDONet](https://en.wikipedia.org/wiki/FidoNet) is a historic computer network based on nightly mail exchange between servers via telephone lines, which was popular in the 1990s. In the [FIDONet Cybernetic Immortality Project](https://soshnikov.com/art/fidoci) we aim to create exhibits that revive the now almost-dead FIDONet by automatically writing correspondence in FIDONet style with generative large language models.
This model is based on the [GPT2-large](https://huggingface.co/gpt2-large) model and was fine-tuned for 2 epochs on archives of [ExecPC BBS](https://en.wikipedia.org/wiki/ExecPC_BBS), obtained from [here](https://breakintochat.com/collections/messages/fidonet/index.html). Fine-tuning took around 9 hours on an NVIDIA A100 in the Yandex DataSphere service.
This code can be used for generation:
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

model_name = 'estonto/fido-gpt'
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline(model=model, tokenizer=tokenizer, task="text-generation", device="cuda")

# The training data stores newlines as literal "\n", so convert them back after generation
result = pipe("<s>Topic: COMPUTING", do_sample=True, max_length=500)[0]['generated_text'].replace('\\n', '\n')
```
Project idea and model training: [Dmitry Soshnikov](https://soshnikov.com)
|
DrishtiSharma/distilbert-base-uncased-hate-offensive-normal-speech-lr-2e-05
|
DrishtiSharma
| 2023-09-07T20:33:47Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T20:32:47Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-hate-offensive-normal-speech-lr-2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-hate-offensive-normal-speech-lr-2e-05
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0178
- Accuracy: 0.9935
- Weighted f1: 0.9935
- Weighted recall: 0.9935
- Weighted precision: 0.9936
- Micro f1: 0.9935
- Micro recall: 0.9935
- Micro precision: 0.9935
- Macro f1: 0.9932
- Macro recall: 0.9938
- Macro precision: 0.9927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Weighted recall | Weighted precision | Micro f1 | Micro recall | Micro precision | Macro f1 | Macro recall | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------------:|:--------:|:------------:|:---------------:|:--------:|:------------:|:---------------:|
| 0.5013 | 1.0 | 153 | 0.0914 | 0.9642 | 0.9643 | 0.9642 | 0.9649 | 0.9642 | 0.9642 | 0.9642 | 0.9629 | 0.9639 | 0.9623 |
| 0.0924 | 2.0 | 306 | 0.0314 | 0.9935 | 0.9935 | 0.9935 | 0.9936 | 0.9935 | 0.9935 | 0.9935 | 0.9932 | 0.9938 | 0.9927 |
| 0.0432 | 3.0 | 459 | 0.0298 | 0.9870 | 0.9870 | 0.9870 | 0.9875 | 0.9870 | 0.9870 | 0.9870 | 0.9860 | 0.9869 | 0.9857 |
| 0.0217 | 4.0 | 612 | 0.0259 | 0.9902 | 0.9903 | 0.9902 | 0.9905 | 0.9902 | 0.9902 | 0.9902 | 0.9896 | 0.9903 | 0.9891 |
| 0.0148 | 5.0 | 765 | 0.0178 | 0.9935 | 0.9935 | 0.9935 | 0.9936 | 0.9935 | 0.9935 | 0.9935 | 0.9932 | 0.9938 | 0.9927 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.6.dev0
- Tokenizers 0.13.3
|
matgu23/zws2
|
matgu23
| 2023-09-07T20:21:26Z | 9 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-29T23:33:54Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### zws2 Dreambooth model trained by matgu23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
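A minimal diffusers sketch is shown below; the `zws2` instance token is an assumption based on the model name and is not confirmed by this card:
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; the "zws2" instance token is an assumption based on the model name.
pipe = StableDiffusionPipeline.from_pretrained("matgu23/zws2", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of zws2", num_inference_steps=30).images[0]
image.save("zws2.png")
```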
Sample pictures of this concept:
|
CyberHarem/luviagelita_edelfelt_fatekaleidlinerprismaillya
|
CyberHarem
| 2023-09-07T20:18:51Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/luviagelita_edelfelt_fatekaleidlinerprismaillya",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T20:05:42Z |
---
license: mit
datasets:
- CyberHarem/luviagelita_edelfelt_fatekaleidlinerprismaillya
pipeline_tag: text-to-image
tags:
- art
---
# Lora of luviagelita_edelfelt_fatekaleidlinerprismaillya
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for a given step, you need to use them together: the pt file serves as an embedding, while the safetensors file is loaded as the LoRA weights.
For example, to use the model from step 7540, download `7540/luviagelita_edelfelt_fatekaleidlinerprismaillya.pt` as the embedding and `7540/luviagelita_edelfelt_fatekaleidlinerprismaillya.safetensors` as the LoRA weights. Using both files together, you can generate images of the desired character, as in the sketch below.
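A minimal diffusers-style sketch of combining the two files; it assumes the step-7540 files have been downloaded into the working directory and that they are compatible with diffusers' standard textual-inversion and LoRA loaders, which is not guaranteed for HCP-Diffusion exports:
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; file paths assume the step-7540 files sit in the working directory,
# and loader compatibility with HCP-Diffusion exports is an assumption.
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("luviagelita_edelfelt_fatekaleidlinerprismaillya.pt",
                            token="luviagelita_edelfelt_fatekaleidlinerprismaillya")
pipe.load_lora_weights(".", weight_name="luviagelita_edelfelt_fatekaleidlinerprismaillya.safetensors")

image = pipe("luviagelita_edelfelt_fatekaleidlinerprismaillya, long_hair, blonde_hair, drill_hair").images[0]
image.save("preview.png")
```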
**The best step we recommend is 7540**, with a score of 0.873. The trigger words are:
1. `luviagelita_edelfelt_fatekaleidlinerprismaillya`
2. `long_hair, blonde_hair, drill_hair, ribbon, hair_ribbon, bow, brown_eyes, breasts`
This model is not recommended for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8700 | 0.821 | [Download](8700/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8700/previews/nude.png) | [<NSFW, click to see>](8700/previews/nude2.png) |  |  |
| 8120 | 0.811 | [Download](8120/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8120/previews/nude.png) | [<NSFW, click to see>](8120/previews/nude2.png) |  |  |
| **7540** | **0.873** | [**Download**](7540/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7540/previews/nude.png) | [<NSFW, click to see>](7540/previews/nude2.png) |  |  |
| 6960 | 0.805 | [Download](6960/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6960/previews/nude.png) | [<NSFW, click to see>](6960/previews/nude2.png) |  |  |
| 6380 | 0.865 | [Download](6380/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6380/previews/nude.png) | [<NSFW, click to see>](6380/previews/nude2.png) |  |  |
| 5800 | 0.847 | [Download](5800/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5800/previews/nude.png) | [<NSFW, click to see>](5800/previews/nude2.png) |  |  |
| 5220 | 0.868 | [Download](5220/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5220/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5220/previews/nude.png) | [<NSFW, click to see>](5220/previews/nude2.png) |  |  |
| 4640 | 0.834 | [Download](4640/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4640/previews/nude.png) | [<NSFW, click to see>](4640/previews/nude2.png) |  |  |
| 4060 | 0.838 | [Download](4060/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4060/previews/nude.png) | [<NSFW, click to see>](4060/previews/nude2.png) |  |  |
| 3480 | 0.826 | [Download](3480/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3480/previews/nude.png) | [<NSFW, click to see>](3480/previews/nude2.png) |  |  |
| 2900 | 0.832 | [Download](2900/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2900/previews/nude.png) | [<NSFW, click to see>](2900/previews/nude2.png) |  |  |
| 2320 | 0.846 | [Download](2320/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2320/previews/nude.png) | [<NSFW, click to see>](2320/previews/nude2.png) |  |  |
| 1740 | 0.818 | [Download](1740/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1740/previews/nude.png) | [<NSFW, click to see>](1740/previews/nude2.png) |  |  |
| 1160 | 0.832 | [Download](1160/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1160/previews/nude.png) | [<NSFW, click to see>](1160/previews/nude2.png) |  |  |
| 580 | 0.714 | [Download](580/luviagelita_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](580/previews/nude.png) | [<NSFW, click to see>](580/previews/nude2.png) |  |  |
|
DrishtiSharma/bert-large-uncased-hate-offensive-normal-speech-lr-2e-05
|
DrishtiSharma
| 2023-09-07T20:17:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T19:00:48Z |
---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-uncased-hate-offensive-normal-speech-lr-2e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-hate-offensive-normal-speech-lr-2e-05
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0097
- Accuracy: 0.9935
- Weighted f1: 0.9935
- Weighted recall: 0.9935
- Weighted precision: 0.9936
- Micro f1: 0.9935
- Micro recall: 0.9935
- Micro precision: 0.9935
- Macro f1: 0.9932
- Macro recall: 0.9938
- Macro precision: 0.9927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Weighted recall | Weighted precision | Micro f1 | Micro recall | Micro precision | Macro f1 | Macro recall | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------------:|:--------:|:------------:|:---------------:|:--------:|:------------:|:---------------:|
| 0.5454 | 1.0 | 153 | 0.0953 | 0.9739 | 0.9743 | 0.9739 | 0.9761 | 0.9739 | 0.9739 | 0.9739 | 0.9730 | 0.9752 | 0.9725 |
| 0.0682 | 2.0 | 306 | 0.0285 | 0.9902 | 0.9903 | 0.9902 | 0.9905 | 0.9902 | 0.9902 | 0.9902 | 0.9896 | 0.9903 | 0.9891 |
| 0.025 | 3.0 | 459 | 0.0381 | 0.9902 | 0.9903 | 0.9902 | 0.9905 | 0.9902 | 0.9902 | 0.9902 | 0.9896 | 0.9903 | 0.9891 |
| 0.0212 | 4.0 | 612 | 0.0246 | 0.9902 | 0.9903 | 0.9902 | 0.9905 | 0.9902 | 0.9902 | 0.9902 | 0.9896 | 0.9903 | 0.9891 |
| 0.0114 | 5.0 | 765 | 0.0097 | 0.9935 | 0.9935 | 0.9935 | 0.9936 | 0.9935 | 0.9935 | 0.9935 | 0.9932 | 0.9938 | 0.9927 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.6.dev0
- Tokenizers 0.13.3
|
theodor1289/thesis_halfsize_text10_vision10
|
theodor1289
| 2023-09-07T20:00:50Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"flava",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2023-09-07T19:46:52Z |
flava_half-wit/date(2023-09-04)_time(16:56:17)/seed(5501650)-magic({'enable': True})-text_perc(10)-vision_perc(10/flava-epoch=00-step=13867.ckpt
|
Terps/mt5-small-finetuned-amazon-en-es
|
Terps
| 2023-09-07T19:46:31Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-09-07T18:42:36Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0279
- Rouge1: 16.4284
- Rouge2: 7.8601
- Rougel: 16.0029
- Rougelsum: 16.0246
## Model description
More information needed
## Intended uses & limitations
More information needed
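As a starting point, here is a minimal inference sketch; the model name suggests it was tuned on English/Spanish Amazon reviews, but the card does not document the dataset:
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned mT5 checkpoint as a summarization pipeline.
# The example input is illustrative only; the training corpus is not documented here.
summarizer = pipeline("summarization", model="Terps/mt5-small-finetuned-amazon-en-es")
review = "I bought this for my daughter's birthday and she absolutely loves it. Great build quality for the price."
print(summarizer(review, max_length=30, min_length=5))
```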
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 4.4194 | 1.0 | 1209 | 3.3097 | 14.9867 | 6.4886 | 14.4174 | 14.4646 |
| 3.8132 | 2.0 | 2418 | 3.1602 | 16.1474 | 7.9815 | 15.5342 | 15.6445 |
| 3.5412 | 3.0 | 3627 | 3.0789 | 17.4468 | 8.8014 | 16.9142 | 17.002 |
| 3.3861 | 4.0 | 4836 | 3.0775 | 15.903 | 7.4423 | 15.4008 | 15.3871 |
| 3.2952 | 5.0 | 6045 | 3.0480 | 15.8646 | 7.3936 | 15.3989 | 15.4395 |
| 3.2155 | 6.0 | 7254 | 3.0354 | 16.5887 | 8.0624 | 16.2377 | 16.2562 |
| 3.1896 | 7.0 | 8463 | 3.0273 | 17.1092 | 8.5391 | 16.6507 | 16.7272 |
| 3.1594 | 8.0 | 9672 | 3.0279 | 16.4284 | 7.8601 | 16.0029 | 16.0246 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Onutoa/1_7e-3_1_0.5
|
Onutoa
| 2023-09-07T19:40:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T16:41:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_7e-3_1_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_7e-3_1_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4732
- Accuracy: 0.7462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.007
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9787 | 1.0 | 590 | 0.7825 | 0.6217 |
| 1.0111 | 2.0 | 1180 | 0.7676 | 0.6021 |
| 0.9238 | 3.0 | 1770 | 0.6005 | 0.6217 |
| 0.8313 | 4.0 | 2360 | 0.6038 | 0.4321 |
| 0.7671 | 5.0 | 2950 | 0.9066 | 0.6217 |
| 0.7472 | 6.0 | 3540 | 0.6074 | 0.4560 |
| 0.7577 | 7.0 | 4130 | 0.6978 | 0.3807 |
| 0.6835 | 8.0 | 4720 | 0.6612 | 0.6217 |
| 0.6855 | 9.0 | 5310 | 0.7161 | 0.6217 |
| 0.6572 | 10.0 | 5900 | 0.5321 | 0.6370 |
| 0.6389 | 11.0 | 6490 | 0.5122 | 0.6621 |
| 0.5993 | 12.0 | 7080 | 0.5795 | 0.6612 |
| 0.587 | 13.0 | 7670 | 0.5287 | 0.6245 |
| 0.5662 | 14.0 | 8260 | 0.4982 | 0.6664 |
| 0.5474 | 15.0 | 8850 | 0.5174 | 0.6453 |
| 0.5533 | 16.0 | 9440 | 0.5125 | 0.6890 |
| 0.5201 | 17.0 | 10030 | 0.4753 | 0.6716 |
| 0.5055 | 18.0 | 10620 | 0.4841 | 0.6755 |
| 0.4886 | 19.0 | 11210 | 0.4682 | 0.7028 |
| 0.4806 | 20.0 | 11800 | 0.4591 | 0.6905 |
| 0.456 | 21.0 | 12390 | 0.4729 | 0.6896 |
| 0.4627 | 22.0 | 12980 | 0.4434 | 0.7003 |
| 0.4301 | 23.0 | 13570 | 0.4426 | 0.7092 |
| 0.4203 | 24.0 | 14160 | 0.4324 | 0.7092 |
| 0.4175 | 25.0 | 14750 | 0.4642 | 0.7275 |
| 0.3993 | 26.0 | 15340 | 0.5582 | 0.6459 |
| 0.3972 | 27.0 | 15930 | 0.4367 | 0.7076 |
| 0.3812 | 28.0 | 16520 | 0.4484 | 0.7278 |
| 0.3726 | 29.0 | 17110 | 0.4581 | 0.7202 |
| 0.3781 | 30.0 | 17700 | 0.4322 | 0.7275 |
| 0.3578 | 31.0 | 18290 | 0.4970 | 0.7217 |
| 0.3458 | 32.0 | 18880 | 0.6182 | 0.7095 |
| 0.3434 | 33.0 | 19470 | 0.4644 | 0.7095 |
| 0.3338 | 34.0 | 20060 | 0.4355 | 0.7199 |
| 0.3344 | 35.0 | 20650 | 0.4495 | 0.7223 |
| 0.3308 | 36.0 | 21240 | 0.4515 | 0.7330 |
| 0.3208 | 37.0 | 21830 | 0.4562 | 0.7373 |
| 0.3012 | 38.0 | 22420 | 0.4464 | 0.7211 |
| 0.3055 | 39.0 | 23010 | 0.4410 | 0.7382 |
| 0.306 | 40.0 | 23600 | 0.5016 | 0.7343 |
| 0.2894 | 41.0 | 24190 | 0.4726 | 0.7364 |
| 0.2834 | 42.0 | 24780 | 0.4714 | 0.7379 |
| 0.2789 | 43.0 | 25370 | 0.4379 | 0.7199 |
| 0.2759 | 44.0 | 25960 | 0.4570 | 0.7287 |
| 0.2667 | 45.0 | 26550 | 0.4500 | 0.7294 |
| 0.2564 | 46.0 | 27140 | 0.4628 | 0.7413 |
| 0.2541 | 47.0 | 27730 | 0.4643 | 0.7379 |
| 0.2498 | 48.0 | 28320 | 0.4406 | 0.7336 |
| 0.2571 | 49.0 | 28910 | 0.4427 | 0.7373 |
| 0.2423 | 50.0 | 29500 | 0.4658 | 0.7315 |
| 0.2374 | 51.0 | 30090 | 0.4744 | 0.7214 |
| 0.2415 | 52.0 | 30680 | 0.5416 | 0.7373 |
| 0.2309 | 53.0 | 31270 | 0.4830 | 0.7226 |
| 0.2282 | 54.0 | 31860 | 0.4758 | 0.7343 |
| 0.2307 | 55.0 | 32450 | 0.4698 | 0.7266 |
| 0.2213 | 56.0 | 33040 | 0.4458 | 0.7446 |
| 0.2193 | 57.0 | 33630 | 0.4778 | 0.7382 |
| 0.214 | 58.0 | 34220 | 0.4828 | 0.7456 |
| 0.207 | 59.0 | 34810 | 0.4818 | 0.7294 |
| 0.21 | 60.0 | 35400 | 0.4614 | 0.7508 |
| 0.2118 | 61.0 | 35990 | 0.4507 | 0.7480 |
| 0.2031 | 62.0 | 36580 | 0.4718 | 0.7416 |
| 0.1987 | 63.0 | 37170 | 0.4752 | 0.7324 |
| 0.2018 | 64.0 | 37760 | 0.4431 | 0.7388 |
| 0.1889 | 65.0 | 38350 | 0.4769 | 0.7385 |
| 0.1941 | 66.0 | 38940 | 0.4623 | 0.7443 |
| 0.1898 | 67.0 | 39530 | 0.4818 | 0.7355 |
| 0.1872 | 68.0 | 40120 | 0.4678 | 0.7446 |
| 0.1813 | 69.0 | 40710 | 0.4843 | 0.7529 |
| 0.1893 | 70.0 | 41300 | 0.4702 | 0.7459 |
| 0.1885 | 71.0 | 41890 | 0.4931 | 0.7193 |
| 0.1811 | 72.0 | 42480 | 0.4854 | 0.7477 |
| 0.1755 | 73.0 | 43070 | 0.4848 | 0.7373 |
| 0.1768 | 74.0 | 43660 | 0.4867 | 0.7520 |
| 0.1728 | 75.0 | 44250 | 0.5011 | 0.7477 |
| 0.1791 | 76.0 | 44840 | 0.4876 | 0.7416 |
| 0.1733 | 77.0 | 45430 | 0.4920 | 0.7486 |
| 0.1745 | 78.0 | 46020 | 0.4711 | 0.7492 |
| 0.1741 | 79.0 | 46610 | 0.4661 | 0.7401 |
| 0.1706 | 80.0 | 47200 | 0.4670 | 0.7422 |
| 0.165 | 81.0 | 47790 | 0.4736 | 0.7459 |
| 0.1612 | 82.0 | 48380 | 0.4660 | 0.7459 |
| 0.1722 | 83.0 | 48970 | 0.4772 | 0.7410 |
| 0.1638 | 84.0 | 49560 | 0.4767 | 0.7434 |
| 0.1613 | 85.0 | 50150 | 0.4641 | 0.7391 |
| 0.1649 | 86.0 | 50740 | 0.4783 | 0.7450 |
| 0.1609 | 87.0 | 51330 | 0.4734 | 0.7453 |
| 0.1588 | 88.0 | 51920 | 0.4919 | 0.7508 |
| 0.1601 | 89.0 | 52510 | 0.4698 | 0.7453 |
| 0.1573 | 90.0 | 53100 | 0.4765 | 0.7508 |
| 0.1584 | 91.0 | 53690 | 0.4754 | 0.7492 |
| 0.1587 | 92.0 | 54280 | 0.4704 | 0.7413 |
| 0.1521 | 93.0 | 54870 | 0.4865 | 0.7505 |
| 0.1546 | 94.0 | 55460 | 0.4777 | 0.7505 |
| 0.1539 | 95.0 | 56050 | 0.4791 | 0.7526 |
| 0.1545 | 96.0 | 56640 | 0.4721 | 0.7456 |
| 0.1533 | 97.0 | 57230 | 0.4725 | 0.7407 |
| 0.1476 | 98.0 | 57820 | 0.4709 | 0.7462 |
| 0.1489 | 99.0 | 58410 | 0.4731 | 0.7459 |
| 0.1501 | 100.0 | 59000 | 0.4732 | 0.7462 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
billfass/my_bert_model
|
billfass
| 2023-09-07T19:31:20Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-07T15:19:43Z |
# Custom BERT Model for Text Classification
## Model Description
This is a custom BERT model fine-tuned for text classification. The model was trained using a subset of a publicly available dataset and is capable of classifying text into 3 classes.
## Training Details
- **Architecture**: BERT Base Multilingual Cased
- **Training data**: Custom dataset
- **Preprocessing**: Tokenized using BERT's tokenizer, with a max sequence length of 80.
- **Fine-tuning**: The model was trained for 1 epoch with a learning rate of 2e-5, using AdamW optimizer and Cross-Entropy Loss.
- **Evaluation Metrics**: Accuracy on a held-out validation set.
## How to Use
### Dependencies
- Transformers 4.x
- Torch 1.x
### Code Snippet
For classification:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("billfass/my_bert_model")
model = AutoModelForSequenceClassification.from_pretrained("billfass/my_bert_model")
text = "Your example text here."
inputs = tokenizer(text, padding=True, truncation=True, max_length=80, return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
# To get probabilities:
probs = torch.softmax(logits, dim=-1)
```
## Limitations and Bias
- Trained on a specific dataset, so may not generalize well to other kinds of text.
- Uses multilingual cased BERT, so it's not optimized for any specific language.
## Authors
- **Fassinou Bile**
- **billfass2010@gmail.com**
## Acknowledgments
Special thanks to Hugging Face for providing the Transformers library that made this project possible.
---
|
PHL99/dqn-SpaceInvaders-v4-NoFrameskip
|
PHL99
| 2023-09-07T19:19:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T19:18:55Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 535.50 +/- 107.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PHL99 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PHL99 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PHL99
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TencentARC/t2i-adapter-depth-midas-sdxl-1.0
|
TencentARC
| 2023-09-07T19:11:24Z | 5,283 | 31 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"t2i-adapter",
"image-to-image",
"stable-diffusion-xl-diffusers",
"stable-diffusion-xl",
"arxiv:2302.08453",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-09-03T14:46:44Z |
---
license: apache-2.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- art
- t2i-adapter
- image-to-image
- stable-diffusion-xl-diffusers
- stable-diffusion-xl
---
# T2I-Adapter-SDXL - Depth-MiDaS
T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.
This checkpoint provides conditioning on depth for the StableDiffusionXL checkpoint. This was a collaboration between **Tencent ARC** and [**Hugging Face**](https://huggingface.co/).
## Model Details
- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Model complexity:**
| | SD-V1.4/1.5 | SD-XL | T2I-Adapter | T2I-Adapter-SDXL |
| --- | --- |--- |--- |--- |
| Parameters | 860M | 2.6B | 77M | 77/79M |
- **Cite as:**
@misc{
title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
year={2023},
eprint={2302.08453},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
### Checkpoints
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
|[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
|[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
|[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
|[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
|[TencentARC/t2i-adapter-openpose-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-openpose-sdxl-1.0)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|
## Example
To get started, first install the required dependencies:
```bash
pip install -U git+https://github.com/huggingface/diffusers.git
pip install -U controlnet_aux==0.0.7 # for conditioning models and detectors
pip install transformers accelerate safetensors
```
1. Images are first downloaded into the appropriate *control image* format.
2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).
Let's have a look at a simple example using the [Depth-MiDaS Adapter](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0).
- Dependency
```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from controlnet_aux.midas import MidasDetector
import torch
# load adapter
adapter = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, varient="fp16"
).to("cuda")
# load euler_a scheduler
model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae=AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
midas_depth = MidasDetector.from_pretrained(
"valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
).to("cuda")
```
- Condition Image
```py
url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_mid.jpg"
image = load_image(url)
image = midas_depth(
image, detect_resolution=512, image_resolution=1024
)
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>
- Generation
```py
prompt = "A photo of a room, 4k photo, highly detailed"
negative_prompt = "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured"
gen_images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
image=image,
num_inference_steps=30,
adapter_conditioning_scale=1,
guidance_scale=7.5,
).images[0]
gen_images.save('out_mid.png')
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>
### Training
Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/README_sdxl.md).
The model is trained on 3M high-resolution image-text pairs from LAION-Aesthetics V2 with
- Training steps: 35000
- Batch size: Data parallel with a single gpu batch size of `16` for a total batch size of `256`.
- Learning rate: Constant learning rate of `1e-5`.
- Mixed precision: fp16
|
TencentARC/t2i-adapter-canny-sdxl-1.0
|
TencentARC
| 2023-09-07T19:10:05Z | 6,149 | 48 |
diffusers
|
[
"diffusers",
"safetensors",
"art",
"t2i-adapter",
"image-to-image",
"stable-diffusion-xl-diffusers",
"stable-diffusion-xl",
"arxiv:2302.08453",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-09-03T14:19:29Z |
---
license: apache-2.0
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- art
- t2i-adapter
- image-to-image
- stable-diffusion-xl-diffusers
- stable-diffusion-xl
---
# T2I-Adapter-SDXL - Canny
T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint.
This checkpoint provides conditioning on canny for the StableDiffusionXL checkpoint. This was a collaboration between **Tencent ARC** and [**Hugging Face**](https://huggingface.co/).
## Model Details
- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** Apache 2.0
- **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453).
- **Model complexity:**
| | SD-V1.4/1.5 | SD-XL | T2I-Adapter | T2I-Adapter-SDXL |
| --- | --- |--- |--- |--- |
| Parameters | 860M | 2.6B | 77M | 77/79M |
- **Cite as:**
@misc{
title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models},
author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie},
year={2023},
eprint={2302.08453},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
### Checkpoints
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
|[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
|[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
|[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
|[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
|[TencentARC/t2i-adapter-openpose-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-openpose-sdxl-1.0)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|
## Example
To get started, first install the required dependencies:
```bash
pip install -U git+https://github.com/huggingface/diffusers.git
pip install -U controlnet_aux==0.0.7 # for conditioning models and detectors
pip install transformers accelerate safetensors
```
1. Images are first downloaded into the appropriate *control image* format.
2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).
Let's have a look at a simple example using the [Canny Adapter](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0).
- Dependency
```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from controlnet_aux.canny import CannyDetector
import torch
# load adapter
adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")
# load euler_a scheduler
model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae=AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
canny_detector = CannyDetector()
```
- Condition Image
```py
url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg"
image = load_image(url)
# Detect the canny map in low resolution to avoid high-frequency details
image = canny_detector(image, detect_resolution=384, image_resolution=1024)#.resize((1024, 1024))
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>
- Generation
```py
prompt = "Mystical fairy in real, magic, 4k picture, high quality"
negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured"
gen_images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
image=image,
num_inference_steps=30,
guidance_scale=7.5,
adapter_conditioning_scale=0.8,
adapter_conditioning_factor=1
).images[0]
gen_images.save('out_canny.png')
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>
### Training
Our training script was built on top of the official training script that we provide [here](https://github.com/huggingface/diffusers/blob/main/examples/t2i_adapter/README_sdxl.md).
The model is trained on 3M high-resolution image-text pairs from LAION-Aesthetics V2 with
- Training steps: 20000
- Batch size: Data parallel with a single gpu batch size of `16` for a total batch size of `256`.
- Learning rate: Constant learning rate of `1e-5`.
- Mixed precision: fp16
|
slhoefel/distilbert-base-uncased-DON-mask-lemma
|
slhoefel
| 2023-09-07T19:09:30Z | 78 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-07T18:42:57Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: slhoefel/distilbert-base-uncased-DON-mask-lemma
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# slhoefel/distilbert-base-uncased-DON-mask-lemma
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6919
- Validation Loss: 3.3336
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
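As a starting point, here is a minimal inference sketch; the example sentence is illustrative only, since the fine-tuning corpus is not documented here:
```python
from transformers import pipeline

# Minimal sketch: query the fine-tuned masked-language model.
# The checkpoint is TensorFlow-based, so the TF framework is requested explicitly;
# the example sentence is illustrative only.
fill = pipeline("fill-mask", model="slhoefel/distilbert-base-uncased-DON-mask-lemma", framework="tf")
print(fill("The samples were analyzed for [MASK] contamination."))
```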
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -967, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.6919 | 3.3336 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Koltunov-Matthew/my_model
|
Koltunov-Matthew
| 2023-09-07T19:08:26Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-07T16:05:37Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0550
- Rouge1: 0.4076
- Rouge2: 0.2169
- Rougel: 0.3655
- Rougelsum: 0.3654
- Gen Len: 14.4845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.3027 | 1.0 | 6750 | 2.1348 | 0.3964 | 0.2084 | 0.3554 | 0.3552 | 14.5291 |
| 2.2589 | 2.0 | 13500 | 2.0818 | 0.4021 | 0.2127 | 0.3603 | 0.3602 | 14.6178 |
| 2.227 | 3.0 | 20250 | 2.0605 | 0.4067 | 0.2167 | 0.365 | 0.3649 | 14.4537 |
| 2.2137 | 4.0 | 27000 | 2.0550 | 0.4076 | 0.2169 | 0.3655 | 0.3654 | 14.4845 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
facebook/mask2former-swin-large-cityscapes-panoptic
|
facebook
| 2023-09-07T18:57:04Z | 1,317 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-01-03T11:42:47Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on Cityscapes panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance an efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-cityscapes-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
|
facebook/maskformer-swin-small-ade
|
facebook
| 2023-09-07T18:56:38Z | 309 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"maskformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:05Z |
---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# MaskFormer
MaskFormer model trained on ADE20k semantic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-ade")
inputs = feature_extractor(images=image, return_tensors="pt")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-ade")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
|
fastbond/llama-2-7b-guanaco-dolly-test-500step
|
fastbond
| 2023-09-07T18:54:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T18:54:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
prashanth07/ard_docs_check
|
prashanth07
| 2023-09-07T18:47:54Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T18:40:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
osieosie/bloom-mnli-4bit-560m-bnb-seed42
|
osieosie
| 2023-09-07T18:42:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-06T17:22:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
badhorse666/q-taxi-v3
|
badhorse666
| 2023-09-07T18:40:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-07T18:38:56Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="badhorse666/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rigon-tk/ppo-Huggy
|
rigon-tk
| 2023-09-07T18:36:55Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-07T18:36:47Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rigon-tk/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CristianEstupinan/ppo-LunarLander-v2
|
CristianEstupinan
| 2023-09-07T18:33:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-06T11:05:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.21 +/- 25.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bongo2112/sd-15-db-mulokoziepk
|
bongo2112
| 2023-09-07T18:16:02Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2023-09-02T16:30:48Z |
---
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of mulokoziepk man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
mahimairaja/tweet-summarization-llama-2-finetuned
|
mahimairaja
| 2023-09-07T18:09:56Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:Salesforce/dialogstudio",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-07T10:43:33Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- Salesforce/dialogstudio
model-index:
- name: tweet-summarization-llama-2-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet-summarization-llama-2-finetuned
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the Salesforce/dialogstudio dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8996 | 1.0 | 55 | 1.9491 |
| 1.8415 | 2.0 | 110 | 1.8857 |
| 1.7693 | 3.0 | 165 | 1.8749 |
| 1.7136 | 4.0 | 220 | 1.8678 |
| 1.7533 | 5.0 | 275 | 1.8663 |
| 1.6182 | 6.0 | 330 | 1.8665 |
| 1.69 | 7.0 | 385 | 1.8672 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
anupamtripathi/model_sd2_new_data
|
anupamtripathi
| 2023-09-07T17:55:56Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-07T04:10:04Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of Beavertail Pastry food products
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - anupamtripathi/model_sd2_new_data
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1. The weights were trained on a photo of Beavertail Pastry food products using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
Sonny4Sonnix/movie_sentiment_trainer
|
Sonny4Sonnix
| 2023-09-07T17:49:58Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:Sonny4Sonnix/twitter-roberta-base-sentimental-analysis-of-covid-tweets",
"base_model:finetune:Sonny4Sonnix/twitter-roberta-base-sentimental-analysis-of-covid-tweets",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T08:45:17Z |
---
base_model: Sonny4Sonnix/twitter-roberta-base-sentimental-analysis-of-covid-tweets
tags:
- generated_from_trainer
model-index:
- name: movie_sentiment_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movie_sentiment_trainer
This model is a fine-tuned version of [Sonny4Sonnix/twitter-roberta-base-sentimental-analysis-of-covid-tweets](https://huggingface.co/Sonny4Sonnix/twitter-roberta-base-sentimental-analysis-of-covid-tweets) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7168 | 0.2 | 500 | 0.6982 |
| 0.7017 | 0.4 | 1000 | 0.6971 |
| 0.6995 | 0.6 | 1500 | 0.7128 |
| 0.7027 | 0.8 | 2000 | 0.7011 |
| 0.7046 | 1.0 | 2500 | 0.6937 |
| 0.698 | 1.2 | 3000 | 0.6938 |
| 0.6988 | 1.4 | 3500 | 0.6932 |
| 0.6972 | 1.6 | 4000 | 0.6935 |
| 0.698 | 1.8 | 4500 | 0.6940 |
| 0.6975 | 2.0 | 5000 | 0.6973 |
| 0.6977 | 2.2 | 5500 | 0.6932 |
| 0.6955 | 2.4 | 6000 | 0.6933 |
| 0.6952 | 2.6 | 6500 | 0.6932 |
| 0.6946 | 2.8 | 7000 | 0.6941 |
| 0.6944 | 3.0 | 7500 | 0.6934 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
chunyuu/results
|
chunyuu
| 2023-09-07T17:49:34Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-07T17:46:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Onutoa/1_9e-3_5_0.1
|
Onutoa
| 2023-09-07T17:44:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T14:46:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_9e-3_5_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_9e-3_5_0.1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9096
- Accuracy: 0.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.009
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6689 | 1.0 | 590 | 1.8930 | 0.3792 |
| 1.4177 | 2.0 | 1180 | 1.1713 | 0.6217 |
| 1.4671 | 3.0 | 1770 | 0.9910 | 0.4239 |
| 1.2704 | 4.0 | 2360 | 1.0000 | 0.4969 |
| 1.1101 | 5.0 | 2950 | 0.8316 | 0.6459 |
| 1.0767 | 6.0 | 3540 | 0.9325 | 0.6428 |
| 1.0047 | 7.0 | 4130 | 1.4778 | 0.4725 |
| 0.9251 | 8.0 | 4720 | 0.7582 | 0.6801 |
| 0.8846 | 9.0 | 5310 | 0.8984 | 0.6737 |
| 0.8439 | 10.0 | 5900 | 0.8034 | 0.7018 |
| 0.8068 | 11.0 | 6490 | 0.8305 | 0.6624 |
| 0.7643 | 12.0 | 7080 | 1.0910 | 0.5859 |
| 0.7306 | 13.0 | 7670 | 0.7682 | 0.6908 |
| 0.6488 | 14.0 | 8260 | 0.7171 | 0.7226 |
| 0.6521 | 15.0 | 8850 | 0.6864 | 0.7202 |
| 0.6048 | 16.0 | 9440 | 0.7442 | 0.7260 |
| 0.5536 | 17.0 | 10030 | 1.0092 | 0.6532 |
| 0.5654 | 18.0 | 10620 | 0.7884 | 0.7052 |
| 0.5349 | 19.0 | 11210 | 0.7640 | 0.7073 |
| 0.4958 | 20.0 | 11800 | 0.7724 | 0.7343 |
| 0.4706 | 21.0 | 12390 | 0.7728 | 0.7183 |
| 0.459 | 22.0 | 12980 | 0.7394 | 0.7254 |
| 0.4362 | 23.0 | 13570 | 0.7550 | 0.7196 |
| 0.4176 | 24.0 | 14160 | 0.7744 | 0.7248 |
| 0.4012 | 25.0 | 14750 | 0.8998 | 0.7364 |
| 0.388 | 26.0 | 15340 | 0.9046 | 0.7104 |
| 0.3852 | 27.0 | 15930 | 0.7894 | 0.7278 |
| 0.3737 | 28.0 | 16520 | 0.8274 | 0.7391 |
| 0.3456 | 29.0 | 17110 | 0.7725 | 0.7471 |
| 0.34 | 30.0 | 17700 | 0.9009 | 0.7260 |
| 0.3247 | 31.0 | 18290 | 0.7733 | 0.7398 |
| 0.3197 | 32.0 | 18880 | 0.8370 | 0.7385 |
| 0.3109 | 33.0 | 19470 | 0.8705 | 0.7269 |
| 0.3047 | 34.0 | 20060 | 0.8475 | 0.7373 |
| 0.2815 | 35.0 | 20650 | 0.9676 | 0.7407 |
| 0.2782 | 36.0 | 21240 | 0.8183 | 0.7450 |
| 0.2808 | 37.0 | 21830 | 0.8551 | 0.7394 |
| 0.2639 | 38.0 | 22420 | 0.9552 | 0.7440 |
| 0.2599 | 39.0 | 23010 | 0.8785 | 0.7422 |
| 0.2563 | 40.0 | 23600 | 1.0538 | 0.7364 |
| 0.2471 | 41.0 | 24190 | 0.9479 | 0.7502 |
| 0.2524 | 42.0 | 24780 | 0.9348 | 0.7398 |
| 0.2419 | 43.0 | 25370 | 0.9101 | 0.7401 |
| 0.2338 | 44.0 | 25960 | 0.8726 | 0.7394 |
| 0.2218 | 45.0 | 26550 | 0.8953 | 0.7416 |
| 0.2115 | 46.0 | 27140 | 0.8966 | 0.7291 |
| 0.2234 | 47.0 | 27730 | 0.9359 | 0.7416 |
| 0.2047 | 48.0 | 28320 | 0.9434 | 0.7284 |
| 0.2218 | 49.0 | 28910 | 0.9202 | 0.7465 |
| 0.2075 | 50.0 | 29500 | 0.8866 | 0.7394 |
| 0.1982 | 51.0 | 30090 | 0.9081 | 0.7358 |
| 0.2064 | 52.0 | 30680 | 0.9691 | 0.7321 |
| 0.1955 | 53.0 | 31270 | 0.9527 | 0.7275 |
| 0.2006 | 54.0 | 31860 | 0.8744 | 0.7456 |
| 0.2021 | 55.0 | 32450 | 0.9529 | 0.7419 |
| 0.1932 | 56.0 | 33040 | 0.9040 | 0.7391 |
| 0.1823 | 57.0 | 33630 | 0.9188 | 0.7382 |
| 0.1726 | 58.0 | 34220 | 0.8715 | 0.7385 |
| 0.1867 | 59.0 | 34810 | 0.9165 | 0.7410 |
| 0.1831 | 60.0 | 35400 | 0.9393 | 0.7431 |
| 0.1741 | 61.0 | 35990 | 0.9843 | 0.7502 |
| 0.1687 | 62.0 | 36580 | 0.9161 | 0.7419 |
| 0.1712 | 63.0 | 37170 | 0.9630 | 0.7431 |
| 0.1742 | 64.0 | 37760 | 0.9306 | 0.7443 |
| 0.1721 | 65.0 | 38350 | 0.9384 | 0.7446 |
| 0.1614 | 66.0 | 38940 | 0.9237 | 0.7401 |
| 0.1631 | 67.0 | 39530 | 0.9315 | 0.7404 |
| 0.1626 | 68.0 | 40120 | 0.8884 | 0.7434 |
| 0.1547 | 69.0 | 40710 | 0.9163 | 0.7483 |
| 0.1609 | 70.0 | 41300 | 0.9340 | 0.7422 |
| 0.1592 | 71.0 | 41890 | 0.9292 | 0.7352 |
| 0.1588 | 72.0 | 42480 | 0.8887 | 0.7495 |
| 0.1504 | 73.0 | 43070 | 0.9228 | 0.7480 |
| 0.1422 | 74.0 | 43660 | 0.9570 | 0.7361 |
| 0.1535 | 75.0 | 44250 | 0.9705 | 0.7446 |
| 0.1486 | 76.0 | 44840 | 0.9364 | 0.7477 |
| 0.146 | 77.0 | 45430 | 0.9385 | 0.7517 |
| 0.1519 | 78.0 | 46020 | 0.8991 | 0.7495 |
| 0.148 | 79.0 | 46610 | 0.9516 | 0.7483 |
| 0.1388 | 80.0 | 47200 | 0.9189 | 0.7462 |
| 0.1392 | 81.0 | 47790 | 0.8985 | 0.7474 |
| 0.1426 | 82.0 | 48380 | 0.9112 | 0.7459 |
| 0.1388 | 83.0 | 48970 | 0.9468 | 0.7456 |
| 0.1396 | 84.0 | 49560 | 0.9185 | 0.7474 |
| 0.1316 | 85.0 | 50150 | 0.9230 | 0.7434 |
| 0.1332 | 86.0 | 50740 | 0.9365 | 0.7388 |
| 0.1245 | 87.0 | 51330 | 0.9405 | 0.7502 |
| 0.1283 | 88.0 | 51920 | 0.9384 | 0.7453 |
| 0.1309 | 89.0 | 52510 | 0.9250 | 0.7483 |
| 0.127 | 90.0 | 53100 | 0.9176 | 0.7434 |
| 0.124 | 91.0 | 53690 | 0.9207 | 0.7446 |
| 0.1294 | 92.0 | 54280 | 0.8949 | 0.7489 |
| 0.1322 | 93.0 | 54870 | 0.9154 | 0.7495 |
| 0.1242 | 94.0 | 55460 | 0.9033 | 0.7508 |
| 0.1251 | 95.0 | 56050 | 0.9201 | 0.7502 |
| 0.1174 | 96.0 | 56640 | 0.9043 | 0.7480 |
| 0.1284 | 97.0 | 57230 | 0.9111 | 0.7489 |
| 0.1188 | 98.0 | 57820 | 0.9175 | 0.7489 |
| 0.1201 | 99.0 | 58410 | 0.9150 | 0.7498 |
| 0.1229 | 100.0 | 59000 | 0.9096 | 0.7495 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CristianEstupinan/ppo-Huggy
|
CristianEstupinan
| 2023-09-07T17:44:17Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-07T17:44:13Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: CristianEstupinan/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chakra17/pokemon-lora
|
chakra17
| 2023-09-07T17:38:09Z | 3 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-07T17:19:03Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - chakra17/pokemon-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




|
mariogiordano/Bert-emotion-analysis
|
mariogiordano
| 2023-09-07T17:38:06Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T16:38:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Bert-emotion-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-emotion-analysis
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1244
- Accuracy: 0.6220
- F1: 0.6112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 83 | 1.3491 | 0.5572 | 0.5410 |
| No log | 2.0 | 166 | 1.1244 | 0.6220 | 0.6112 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
vinayaksodar/ppo-Huggy
|
vinayaksodar
| 2023-09-07T17:06:05Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-07T17:05:53Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: vinayaksodar/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0
|
turing-motors
| 2023-09-07T16:59:14Z | 6 | 8 |
transformers
|
[
"transformers",
"pytorch",
"video_blip",
"text2text-generation",
"heron",
"vision",
"image-captioning",
"VQA",
"image-to-text",
"ja",
"arxiv:2301.12597",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2023-09-06T09:31:44Z |
---
language:
- ja
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license:
- cc-by-nc-4.0
inference: false
---
# Heron BLIP Japanese StableLM Base 7B

## DEMO
You can play the demo of this model [here](https://huggingface.co/spaces/turing-motors/heron_chat_blip).
## Model Details
Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.
## Usage
Follow [the installation guide](https://github.com/turingmotors/heron/tree/dev-0.0.1#1-clone-this-repository).
```python
import torch
from heron.models.video_blip import VideoBlipForConditionalGeneration, VideoBlipProcessor
from transformers import LlamaTokenizer
device_id = 0
device = f"cuda:{device_id}"
max_length = 512
MODEL_NAME = "turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0"
model = VideoBlipForConditionalGeneration.from_pretrained(
MODEL_NAME, torch_dtype=torch.float16, ignore_mismatched_sizes=True
)
model = model.half()
model.eval()
model.to(device)
# prepare a processor
processor = VideoBlipProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁'])
processor.tokenizer = tokenizer
import requests
from PIL import Image
# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = f"##human: この画像の面白い点は何ですか?\n##gpt: "
# do preprocessing
inputs = processor(
text=text,
images=image,
return_tensors="pt",
truncation=True,
)
inputs = {k: v.to(device) for k, v in inputs.items()}
inputs["pixel_values"] = inputs["pixel_values"].to(device, torch.float16)
# set eos token
eos_token_id_list = [
processor.tokenizer.pad_token_id,
processor.tokenizer.eos_token_id,
int(tokenizer.convert_tokens_to_ids("##"))
]
# do inference
with torch.no_grad():
out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list, no_repeat_ngram_size=2)
# print result
print(processor.tokenizer.batch_decode(out))
```
## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [BLIP2](https://arxiv.org/abs/2301.12597)
* **Lamguage Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b)
* **Language(s)**: Japanese
### Training
This model was initially trained with the Adaptor using STAIR Captions. In the second phase, it was fine-tuned with [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) and Japanese Visual Genome using LoRA.
### Training Dataset
- [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)
- [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)
## Use and Limitations
### Intended Use
This model is intended for use in chat-like applications and for research purposes.
### Limitations
The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite
```bibtex
@misc{BlipJapaneseStableLM,
url = {[https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0)},
title = {Heron BLIP Japanese StableLM Base 7B},
author = {Kotaro Tanahashi, Yuichi Inoue, and Yu Yamaguchi}
}
```
## Citations
```bibtex
@misc{JapaneseInstructBLIPAlpha,
url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)},
title = {Japanese InstructBLIP Alpha},
author = {Shing, Makoto and Akiba, Takuya}
}
```
---
license: cc-by-nc-4.0
---
|
CyberHarem/miyu_edelfelt_fatekaleidlinerprismaillya
|
CyberHarem
| 2023-09-07T16:45:08Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/miyu_edelfelt_fatekaleidlinerprismaillya",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-07T16:24:45Z |
---
license: mit
datasets:
- CyberHarem/miyu_edelfelt_fatekaleidlinerprismaillya
pipeline_tag: text-to-image
tags:
- art
---
# Lora of miyu_edelfelt_fatekaleidlinerprismaillya
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6800, you need to download `6800/miyu_edelfelt_fatekaleidlinerprismaillya.pt` as the embedding and `6800/miyu_edelfelt_fatekaleidlinerprismaillya.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6800**, with the score of 0.709. The trigger words are:
1. `miyu_edelfelt_fatekaleidlinerprismaillya`
2. `black_hair, brown_eyes, hair_ornament, hairclip, bangs, long_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:------------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------|
| 10200 | 0.705 | [Download](10200/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10200/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](10200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10200/previews/nude.png) | [<NSFW, click to see>](10200/previews/nude2.png) |  |  |
| 9520 | 0.687 | [Download](9520/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9520/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](9520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9520/previews/nude.png) | [<NSFW, click to see>](9520/previews/nude2.png) |  |  |
| 8840 | 0.647 | [Download](8840/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8840/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](8840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8840/previews/nude.png) | [<NSFW, click to see>](8840/previews/nude2.png) |  |  |
| 8160 | 0.694 | [Download](8160/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8160/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](8160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8160/previews/nude.png) | [<NSFW, click to see>](8160/previews/nude2.png) |  |  |
| 7480 | 0.708 | [Download](7480/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7480/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](7480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7480/previews/nude.png) | [<NSFW, click to see>](7480/previews/nude2.png) |  |  |
| **6800** | **0.709** | [**Download**](6800/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6800/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](6800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6800/previews/nude.png) | [<NSFW, click to see>](6800/previews/nude2.png) |  |  |
| 6120 | 0.685 | [Download](6120/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6120/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](6120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6120/previews/nude.png) | [<NSFW, click to see>](6120/previews/nude2.png) |  |  |
| 5440 | 0.680 | [Download](5440/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5440/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](5440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5440/previews/nude.png) | [<NSFW, click to see>](5440/previews/nude2.png) |  |  |
| 4760 | 0.649 | [Download](4760/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4080 | 0.646 | [Download](4080/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3400 | 0.620 | [Download](3400/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 2720 | 0.622 | [Download](2720/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2040 | 0.447 | [Download](2040/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1360 | 0.386 | [Download](1360/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 680 | 0.254 | [Download](680/miyu_edelfelt_fatekaleidlinerprismaillya.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/pattern_15.png) |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
|
nagupv/stablebeluga7b_ckpoint1500
|
nagupv
| 2023-09-07T16:43:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-07T16:43:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Onutoa/1_7e-3_10_0.1
|
Onutoa
| 2023-09-07T16:41:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-07T13:40:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_7e-3_10_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_7e-3_10_0.1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9819
- Accuracy: 0.7303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.007
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4686 | 1.0 | 590 | 2.1510 | 0.3798 |
| 1.4409 | 2.0 | 1180 | 1.6620 | 0.6214 |
| 1.3336 | 3.0 | 1770 | 2.9692 | 0.3789 |
| 1.3331 | 4.0 | 2360 | 0.9502 | 0.6306 |
| 1.1121 | 5.0 | 2950 | 1.0075 | 0.6294 |
| 1.1211 | 6.0 | 3540 | 0.8872 | 0.6612 |
| 1.0596 | 7.0 | 4130 | 2.2995 | 0.4128 |
| 0.9931 | 8.0 | 4720 | 0.9438 | 0.6810 |
| 0.9235 | 9.0 | 5310 | 0.8872 | 0.6581 |
| 0.9613 | 10.0 | 5900 | 1.2425 | 0.5847 |
| 0.9177 | 11.0 | 6490 | 0.8943 | 0.6862 |
| 0.7985 | 12.0 | 7080 | 0.8038 | 0.6884 |
| 0.7943 | 13.0 | 7670 | 0.8016 | 0.6924 |
| 0.7742 | 14.0 | 8260 | 0.7611 | 0.7162 |
| 0.7373 | 15.0 | 8850 | 0.8728 | 0.7128 |
| 0.7054 | 16.0 | 9440 | 0.7415 | 0.7116 |
| 0.6589 | 17.0 | 10030 | 0.7437 | 0.7070 |
| 0.6449 | 18.0 | 10620 | 1.1703 | 0.6303 |
| 0.5872 | 19.0 | 11210 | 0.7583 | 0.7217 |
| 0.6065 | 20.0 | 11800 | 0.8280 | 0.7196 |
| 0.5721 | 21.0 | 12390 | 0.8555 | 0.7012 |
| 0.5955 | 22.0 | 12980 | 0.8109 | 0.7147 |
| 0.5202 | 23.0 | 13570 | 0.7935 | 0.7245 |
| 0.5017 | 24.0 | 14160 | 0.8676 | 0.6976 |
| 0.4923 | 25.0 | 14750 | 0.9052 | 0.7346 |
| 0.4774 | 26.0 | 15340 | 1.5937 | 0.5976 |
| 0.4714 | 27.0 | 15930 | 0.8523 | 0.7220 |
| 0.4439 | 28.0 | 16520 | 0.8909 | 0.7278 |
| 0.4227 | 29.0 | 17110 | 0.9224 | 0.7321 |
| 0.4029 | 30.0 | 17700 | 0.8559 | 0.7245 |
| 0.4015 | 31.0 | 18290 | 0.9032 | 0.7309 |
| 0.3923 | 32.0 | 18880 | 0.9003 | 0.7327 |
| 0.3897 | 33.0 | 19470 | 0.9786 | 0.6966 |
| 0.354 | 34.0 | 20060 | 0.8606 | 0.7251 |
| 0.3508 | 35.0 | 20650 | 0.8788 | 0.7278 |
| 0.3293 | 36.0 | 21240 | 1.1236 | 0.7214 |
| 0.3336 | 37.0 | 21830 | 0.9196 | 0.7266 |
| 0.3407 | 38.0 | 22420 | 0.9319 | 0.7220 |
| 0.3338 | 39.0 | 23010 | 0.8982 | 0.7321 |
| 0.3065 | 40.0 | 23600 | 0.9969 | 0.7333 |
| 0.2972 | 41.0 | 24190 | 1.0879 | 0.7309 |
| 0.2904 | 42.0 | 24780 | 0.9547 | 0.7327 |
| 0.2883 | 43.0 | 25370 | 0.9553 | 0.7187 |
| 0.2889 | 44.0 | 25960 | 0.9805 | 0.7251 |
| 0.269 | 45.0 | 26550 | 0.9516 | 0.7321 |
| 0.2573 | 46.0 | 27140 | 0.9094 | 0.7242 |
| 0.2679 | 47.0 | 27730 | 0.9398 | 0.7217 |
| 0.2595 | 48.0 | 28320 | 1.0380 | 0.7064 |
| 0.2819 | 49.0 | 28910 | 0.9346 | 0.7324 |
| 0.247 | 50.0 | 29500 | 0.9272 | 0.7239 |
| 0.2482 | 51.0 | 30090 | 0.9673 | 0.7254 |
| 0.242 | 52.0 | 30680 | 1.0115 | 0.7217 |
| 0.2343 | 53.0 | 31270 | 0.9958 | 0.7226 |
| 0.2381 | 54.0 | 31860 | 0.9392 | 0.7263 |
| 0.2279 | 55.0 | 32450 | 0.9564 | 0.7284 |
| 0.2256 | 56.0 | 33040 | 1.0298 | 0.7239 |
| 0.2267 | 57.0 | 33630 | 1.0001 | 0.7263 |
| 0.2161 | 58.0 | 34220 | 0.9867 | 0.7248 |
| 0.214 | 59.0 | 34810 | 0.9574 | 0.7226 |
| 0.2148 | 60.0 | 35400 | 1.0306 | 0.7229 |
| 0.2128 | 61.0 | 35990 | 1.0751 | 0.7346 |
| 0.2081 | 62.0 | 36580 | 0.9656 | 0.7263 |
| 0.203 | 63.0 | 37170 | 1.0100 | 0.7263 |
| 0.204 | 64.0 | 37760 | 0.9536 | 0.7297 |
| 0.1988 | 65.0 | 38350 | 0.9686 | 0.7269 |
| 0.1976 | 66.0 | 38940 | 0.9927 | 0.7297 |
| 0.1943 | 67.0 | 39530 | 0.9987 | 0.7309 |
| 0.1941 | 68.0 | 40120 | 0.9876 | 0.7309 |
| 0.1862 | 69.0 | 40710 | 0.9646 | 0.7321 |
| 0.1986 | 70.0 | 41300 | 1.0332 | 0.7324 |
| 0.1872 | 71.0 | 41890 | 0.9861 | 0.7324 |
| 0.1898 | 72.0 | 42480 | 0.9831 | 0.7346 |
| 0.1793 | 73.0 | 43070 | 0.9901 | 0.7303 |
| 0.1843 | 74.0 | 43660 | 1.0411 | 0.7294 |
| 0.1757 | 75.0 | 44250 | 1.0355 | 0.7312 |
| 0.1814 | 76.0 | 44840 | 1.0320 | 0.7239 |
| 0.1764 | 77.0 | 45430 | 0.9895 | 0.7333 |
| 0.1779 | 78.0 | 46020 | 0.9944 | 0.7367 |
| 0.1752 | 79.0 | 46610 | 0.9581 | 0.7263 |
| 0.1734 | 80.0 | 47200 | 0.9525 | 0.7297 |
| 0.1718 | 81.0 | 47790 | 0.9693 | 0.7275 |
| 0.1722 | 82.0 | 48380 | 0.9876 | 0.7297 |
| 0.1719 | 83.0 | 48970 | 0.9838 | 0.7306 |
| 0.161 | 84.0 | 49560 | 0.9996 | 0.7281 |
| 0.1711 | 85.0 | 50150 | 0.9880 | 0.7291 |
| 0.1634 | 86.0 | 50740 | 1.0062 | 0.7306 |
| 0.1587 | 87.0 | 51330 | 1.0071 | 0.7318 |
| 0.156 | 88.0 | 51920 | 1.0271 | 0.7297 |
| 0.1574 | 89.0 | 52510 | 1.0062 | 0.7321 |
| 0.151 | 90.0 | 53100 | 0.9889 | 0.7263 |
| 0.1553 | 91.0 | 53690 | 0.9676 | 0.7324 |
| 0.1584 | 92.0 | 54280 | 0.9721 | 0.7321 |
| 0.1491 | 93.0 | 54870 | 0.9824 | 0.7349 |
| 0.1523 | 94.0 | 55460 | 0.9880 | 0.7306 |
| 0.1509 | 95.0 | 56050 | 0.9993 | 0.7327 |
| 0.1496 | 96.0 | 56640 | 0.9892 | 0.7318 |
| 0.1518 | 97.0 | 57230 | 0.9925 | 0.7339 |
| 0.149 | 98.0 | 57820 | 0.9845 | 0.7333 |
| 0.1449 | 99.0 | 58410 | 0.9832 | 0.7312 |
| 0.15 | 100.0 | 59000 | 0.9819 | 0.7303 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mariogiordano/finetuning-emotion-model
|
mariogiordano
| 2023-09-07T16:40:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T16:36:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-emotion-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-emotion-model
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1597
- Accuracy: 0.6197
- F1: 0.6118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 100 | 1.5910 | 0.4681 | 0.4490 |
| No log | 2.0 | 200 | 1.1597 | 0.6197 | 0.6118 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
zlsl/ruGPT-3.5-13B-erotic-kink-chat-lora
|
zlsl
| 2023-09-07T16:25:58Z | 26 | 8 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"rugpt",
"chat",
"lora",
"erotic",
"porn",
"text-generation",
"ru",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
text-generation
| 2023-08-24T12:12:46Z |
---
license: cc-by-nc-nd-4.0
language:
- ru
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- rugpt
- chat
- lora
- erotic
- porn
---
LoRA (rank 16, alpha 16) улучшает диалоги на кхм, пикантные темы для ruGPT-3.5-13B.
Обучается на 4-bit GPTQ модели ruGPT-3.5-13B, как будет работать на полной и 8-битной модели не проверял, на 4-х битах результат очень хороший. LoRA будет регулярно обновляться.
Датасет - input-output с контекстом, на данный момент ~1Гб
В стоп-лист добавляйте "\n", "<\/s>"
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.