| modelId string (5–139 chars) | author string (2–42 chars) | last_modified timestamp[us, tz=UTC] (2020-02-15 – 2025-08-26) | downloads int64 (0–223M) | likes int64 (0–11.7k) | library_name string (521 classes) | tags list (1–4.05k items) | pipeline_tag string (55 classes) | createdAt timestamp[us, tz=UTC] (2022-03-02 – 2025-08-26) | card string (11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| LuisChDev/q-FrozenLake-v1-4x4-noSlippery | LuisChDev | 2023-09-20T18:41:05Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-09-20T18:41:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the course notebook (see the sketch below).
model = load_from_hub(repo_id="LuisChDev/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
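`load_from_hub` is not part of a published package; it is typically defined in the Hugging Face Deep RL course notebook this card template comes from. Below is a minimal sketch of such a helper plus a greedy rollout, assuming (as in the course template, but not verified against this repository) that the pickle holds a dictionary with `qtable` and `env_id` keys and that `gymnasium` is used as the `gym` import:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model dictionary from the Hub and unpickle it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="LuisChDev/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # is_slippery=False matches the no_slippery variant

# Greedy rollout with the loaded Q-table (assumes a `qtable` key, as in the course template).
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```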
| chaholl/ppo-Huggy | chaholl | 2023-09-20T18:31:26Z | 1 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2023-09-20T18:31:21Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chaholl/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| LucaAsga/ppo-LunarLander-v2 | LucaAsga | 2023-09-20T18:30:58Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-09-20T18:30:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.02 +/- 20.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
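The card leaves the usage code as a TODO. A minimal sketch of downloading and evaluating the checkpoint with huggingface_sb3 and Stable-Baselines3 follows; the filename `ppo-LunarLander-v2.zip` is an assumption based on the usual `<algo>-<env_id>.zip` naming convention, not confirmed from the repository contents:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption, see note above).
checkpoint = load_from_hub(
    repo_id="LucaAsga/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate over a few episodes to reproduce a mean_reward figure like the one reported above.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```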
| lovegemini63/esm2_t12_35M_UR50D-finetuned-binding-classification | lovegemini63 | 2023-09-20T18:25:55Z | 60 | 0 | transformers | ["transformers", "tf", "esm", "text-classification", "generated_from_keras_callback", "base_model:facebook/esm2_t12_35M_UR50D", "base_model:finetune:facebook/esm2_t12_35M_UR50D", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-09-20T18:25:46Z |
---
license: mit
base_model: facebook/esm2_t12_35M_UR50D
tags:
- generated_from_keras_callback
model-index:
- name: esm2_t12_35M_UR50D-finetuned-binding-classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-binding-classification
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4948
- Train Accuracy: 0.7534
- Validation Loss: 0.5392
- Validation Accuracy: 0.7253
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0}
- training_precision: float32
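For reference, the optimizer configuration above maps to roughly the following TensorFlow/Transformers setup (a sketch reconstructed from the listed hyperparameters, not the original training script; requires TensorFlow to be installed):

```python
from transformers import AdamWeightDecay

# Recreate the optimizer described above: AdamWeightDecay with a constant 2e-5
# learning rate and no weight decay.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.0,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```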
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5196 | 0.7364 | 0.5415 | 0.7249 | 0 |
| 0.5037 | 0.7469 | 0.5433 | 0.7277 | 1 |
| 0.4948 | 0.7534 | 0.5392 | 0.7253 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
| CyberHarem/fukuyama_mai_idolmastercinderellagirls | CyberHarem | 2023-09-20T18:16:25Z | 0 | 0 | null | ["art", "text-to-image", "dataset:CyberHarem/fukuyama_mai_idolmastercinderellagirls", "license:mit", "region:us"] | text-to-image | 2023-09-20T18:00:03Z |
---
license: mit
datasets:
- CyberHarem/fukuyama_mai_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# LoRA of fukuyama_mai_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the chosen step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, to use the model from step 3600, download `3600/fukuyama_mai_idolmastercinderellagirls.pt` as the embedding and `3600/fukuyama_mai_idolmastercinderellagirls.safetensors` as the LoRA. Using both files together, you can generate images of the desired character.
**The recommended step is 3600**, with a score of 0.881. The trigger words are:
1. `fukuyama_mai_idolmastercinderellagirls`
2. `ponytail, black_hair, smile, long_hair, blush, black_eyes, open_mouth`
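The card assumes an A1111/webui-style workflow for these HCP-Diffusion artifacts. Purely as an illustration, here is a rough diffusers-based sketch for the step-3600 files; whether these particular pt/safetensors files load cleanly through diffusers' converters, and whether `Meina/MeinaMix_V11` ships diffusers-format weights, are unverified assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

repo = "CyberHarem/fukuyama_mai_idolmastercinderellagirls"

# Assumption: the preview base model named above is available in diffusers format;
# otherwise substitute any SD 1.x base model you prefer.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The .pt file is loaded as a textual-inversion embedding, the .safetensors file as the LoRA.
pipe.load_textual_inversion(
    repo,
    weight_name="3600/fukuyama_mai_idolmastercinderellagirls.pt",
    token="fukuyama_mai_idolmastercinderellagirls",
)
pipe.load_lora_weights(repo, weight_name="3600/fukuyama_mai_idolmastercinderellagirls.safetensors")

prompt = (
    "fukuyama_mai_idolmastercinderellagirls, ponytail, black_hair, smile, "
    "long_hair, blush, black_eyes, open_mouth"
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("preview.png")
```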
This model is not recommended for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the smallest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6000 | 0.822 | [Download](6000/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](6000/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5600 | 0.859 | [Download](5600/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5600/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 5200 | 0.835 | [Download](5200/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5200/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4800 | 0.825 | [Download](4800/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4800/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4400 | 0.878 | [Download](4400/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4400/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 4000 | 0.833 | [Download](4000/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4000/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| **3600** | **0.881** | [**Download**](3600/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3600/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3200 | 0.813 | [Download](3200/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3200/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2800 | 0.793 | [Download](2800/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2800/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2400 | 0.820 | [Download](2400/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2400/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 2000 | 0.810 | [Download](2000/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2000/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1600 | 0.796 | [Download](1600/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1600/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) |  |  |
| 1200 | 0.751 | [Download](1200/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1200/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 800 | 0.494 | [Download](800/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](800/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) |  |  |
| 400 | 0.567 | [Download](400/fukuyama_mai_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](400/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) |  |  |
| barisaydin/bge-small-en | barisaydin | 2023-09-20T18:15:03Z | 9 | 0 | transformers | ["transformers", "pytorch", "safetensors", "bert", "feature-extraction", "mteb", "sentence transformers", "en", "arxiv:2309.07597", "license:mit", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2023-09-20T18:14:00Z |
---
tags:
- mteb
- sentence transformers
model-index:
- name: bge-small-en
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.34328358208955
- type: ap
value: 37.59947775195661
- type: f1
value: 68.548415491933
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.04527499999999
- type: ap
value: 89.60696356772135
- type: f1
value: 93.03361469382438
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.08
- type: f1
value: 45.66249835363254
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.205999999999996
- type: map_at_10
value: 50.782000000000004
- type: map_at_100
value: 51.547
- type: map_at_1000
value: 51.554
- type: map_at_3
value: 46.515
- type: map_at_5
value: 49.296
- type: mrr_at_1
value: 35.632999999999996
- type: mrr_at_10
value: 50.958999999999996
- type: mrr_at_100
value: 51.724000000000004
- type: mrr_at_1000
value: 51.731
- type: mrr_at_3
value: 46.669
- type: mrr_at_5
value: 49.439
- type: ndcg_at_1
value: 35.205999999999996
- type: ndcg_at_10
value: 58.835
- type: ndcg_at_100
value: 62.095
- type: ndcg_at_1000
value: 62.255
- type: ndcg_at_3
value: 50.255
- type: ndcg_at_5
value: 55.296
- type: precision_at_1
value: 35.205999999999996
- type: precision_at_10
value: 8.421
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.365
- type: precision_at_5
value: 14.680000000000001
- type: recall_at_1
value: 35.205999999999996
- type: recall_at_10
value: 84.211
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.095
- type: recall_at_5
value: 73.4
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.52644476278646
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.973045724188964
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.28285314871488
- type: mrr
value: 74.52743701358659
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 80.09041909160327
- type: cos_sim_spearman
value: 79.96266537706944
- type: euclidean_pearson
value: 79.50774978162241
- type: euclidean_spearman
value: 79.9144715078551
- type: manhattan_pearson
value: 79.2062139879302
- type: manhattan_spearman
value: 79.35000081468212
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.31493506493506
- type: f1
value: 85.2704557977762
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.6837242810816
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.38881249555897
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.884999999999998
- type: map_at_10
value: 39.574
- type: map_at_100
value: 40.993
- type: map_at_1000
value: 41.129
- type: map_at_3
value: 36.089
- type: map_at_5
value: 38.191
- type: mrr_at_1
value: 34.477999999999994
- type: mrr_at_10
value: 45.411
- type: mrr_at_100
value: 46.089999999999996
- type: mrr_at_1000
value: 46.147
- type: mrr_at_3
value: 42.346000000000004
- type: mrr_at_5
value: 44.292
- type: ndcg_at_1
value: 34.477999999999994
- type: ndcg_at_10
value: 46.123999999999995
- type: ndcg_at_100
value: 51.349999999999994
- type: ndcg_at_1000
value: 53.578
- type: ndcg_at_3
value: 40.824
- type: ndcg_at_5
value: 43.571
- type: precision_at_1
value: 34.477999999999994
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 19.742
- type: precision_at_5
value: 14.421000000000001
- type: recall_at_1
value: 27.884999999999998
- type: recall_at_10
value: 59.087
- type: recall_at_100
value: 80.609
- type: recall_at_1000
value: 95.054
- type: recall_at_3
value: 44.082
- type: recall_at_5
value: 51.593999999999994
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.639
- type: map_at_10
value: 40.047
- type: map_at_100
value: 41.302
- type: map_at_1000
value: 41.425
- type: map_at_3
value: 37.406
- type: map_at_5
value: 38.934000000000005
- type: mrr_at_1
value: 37.707
- type: mrr_at_10
value: 46.082
- type: mrr_at_100
value: 46.745
- type: mrr_at_1000
value: 46.786
- type: mrr_at_3
value: 43.980999999999995
- type: mrr_at_5
value: 45.287
- type: ndcg_at_1
value: 37.707
- type: ndcg_at_10
value: 45.525
- type: ndcg_at_100
value: 49.976
- type: ndcg_at_1000
value: 51.94499999999999
- type: ndcg_at_3
value: 41.704
- type: ndcg_at_5
value: 43.596000000000004
- type: precision_at_1
value: 37.707
- type: precision_at_10
value: 8.465
- type: precision_at_100
value: 1.375
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 19.979
- type: precision_at_5
value: 14.115
- type: recall_at_1
value: 30.639
- type: recall_at_10
value: 54.775
- type: recall_at_100
value: 73.678
- type: recall_at_1000
value: 86.142
- type: recall_at_3
value: 43.230000000000004
- type: recall_at_5
value: 48.622
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.038
- type: map_at_10
value: 49.922
- type: map_at_100
value: 51.032
- type: map_at_1000
value: 51.085
- type: map_at_3
value: 46.664
- type: map_at_5
value: 48.588
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 53.566
- type: mrr_at_100
value: 54.318999999999996
- type: mrr_at_1000
value: 54.348
- type: mrr_at_3
value: 51.066
- type: mrr_at_5
value: 52.649
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 55.676
- type: ndcg_at_100
value: 60.126000000000005
- type: ndcg_at_1000
value: 61.208
- type: ndcg_at_3
value: 50.20400000000001
- type: ndcg_at_5
value: 53.038
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 8.953
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.256999999999998
- type: precision_at_5
value: 15.524
- type: recall_at_1
value: 38.038
- type: recall_at_10
value: 69.15
- type: recall_at_100
value: 88.31599999999999
- type: recall_at_1000
value: 95.993
- type: recall_at_3
value: 54.663
- type: recall_at_5
value: 61.373
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.872
- type: map_at_10
value: 32.912
- type: map_at_100
value: 33.972
- type: map_at_1000
value: 34.046
- type: map_at_3
value: 30.361
- type: map_at_5
value: 31.704
- type: mrr_at_1
value: 26.779999999999998
- type: mrr_at_10
value: 34.812
- type: mrr_at_100
value: 35.754999999999995
- type: mrr_at_1000
value: 35.809000000000005
- type: mrr_at_3
value: 32.335
- type: mrr_at_5
value: 33.64
- type: ndcg_at_1
value: 26.779999999999998
- type: ndcg_at_10
value: 37.623
- type: ndcg_at_100
value: 42.924
- type: ndcg_at_1000
value: 44.856
- type: ndcg_at_3
value: 32.574
- type: ndcg_at_5
value: 34.842
- type: precision_at_1
value: 26.779999999999998
- type: precision_at_10
value: 5.729
- type: precision_at_100
value: 0.886
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.559
- type: precision_at_5
value: 9.469
- type: recall_at_1
value: 24.872
- type: recall_at_10
value: 50.400999999999996
- type: recall_at_100
value: 74.954
- type: recall_at_1000
value: 89.56
- type: recall_at_3
value: 36.726
- type: recall_at_5
value: 42.138999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.803
- type: map_at_10
value: 24.348
- type: map_at_100
value: 25.56
- type: map_at_1000
value: 25.668000000000003
- type: map_at_3
value: 21.811
- type: map_at_5
value: 23.287
- type: mrr_at_1
value: 20.771
- type: mrr_at_10
value: 28.961
- type: mrr_at_100
value: 29.979
- type: mrr_at_1000
value: 30.046
- type: mrr_at_3
value: 26.555
- type: mrr_at_5
value: 28.060000000000002
- type: ndcg_at_1
value: 20.771
- type: ndcg_at_10
value: 29.335
- type: ndcg_at_100
value: 35.188
- type: ndcg_at_1000
value: 37.812
- type: ndcg_at_3
value: 24.83
- type: ndcg_at_5
value: 27.119
- type: precision_at_1
value: 20.771
- type: precision_at_10
value: 5.4350000000000005
- type: precision_at_100
value: 0.9480000000000001
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.982
- type: precision_at_5
value: 8.831
- type: recall_at_1
value: 16.803
- type: recall_at_10
value: 40.039
- type: recall_at_100
value: 65.83200000000001
- type: recall_at_1000
value: 84.478
- type: recall_at_3
value: 27.682000000000002
- type: recall_at_5
value: 33.535
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.345
- type: map_at_10
value: 37.757000000000005
- type: map_at_100
value: 39.141
- type: map_at_1000
value: 39.262
- type: map_at_3
value: 35.183
- type: map_at_5
value: 36.592
- type: mrr_at_1
value: 34.649
- type: mrr_at_10
value: 43.586999999999996
- type: mrr_at_100
value: 44.481
- type: mrr_at_1000
value: 44.542
- type: mrr_at_3
value: 41.29
- type: mrr_at_5
value: 42.642
- type: ndcg_at_1
value: 34.649
- type: ndcg_at_10
value: 43.161
- type: ndcg_at_100
value: 48.734
- type: ndcg_at_1000
value: 51.046
- type: ndcg_at_3
value: 39.118
- type: ndcg_at_5
value: 41.022
- type: precision_at_1
value: 34.649
- type: precision_at_10
value: 7.603
- type: precision_at_100
value: 1.209
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 18.319
- type: precision_at_5
value: 12.839
- type: recall_at_1
value: 28.345
- type: recall_at_10
value: 53.367
- type: recall_at_100
value: 76.453
- type: recall_at_1000
value: 91.82000000000001
- type: recall_at_3
value: 41.636
- type: recall_at_5
value: 46.760000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.419
- type: map_at_10
value: 31.716
- type: map_at_100
value: 33.152
- type: map_at_1000
value: 33.267
- type: map_at_3
value: 28.74
- type: map_at_5
value: 30.48
- type: mrr_at_1
value: 28.310999999999996
- type: mrr_at_10
value: 37.039
- type: mrr_at_100
value: 38.09
- type: mrr_at_1000
value: 38.145
- type: mrr_at_3
value: 34.437
- type: mrr_at_5
value: 36.024
- type: ndcg_at_1
value: 28.310999999999996
- type: ndcg_at_10
value: 37.41
- type: ndcg_at_100
value: 43.647999999999996
- type: ndcg_at_1000
value: 46.007
- type: ndcg_at_3
value: 32.509
- type: ndcg_at_5
value: 34.943999999999996
- type: precision_at_1
value: 28.310999999999996
- type: precision_at_10
value: 6.963
- type: precision_at_100
value: 1.1860000000000002
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 15.867999999999999
- type: precision_at_5
value: 11.507000000000001
- type: recall_at_1
value: 22.419
- type: recall_at_10
value: 49.28
- type: recall_at_100
value: 75.802
- type: recall_at_1000
value: 92.032
- type: recall_at_3
value: 35.399
- type: recall_at_5
value: 42.027
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.669249999999998
- type: map_at_10
value: 33.332583333333325
- type: map_at_100
value: 34.557833333333335
- type: map_at_1000
value: 34.67141666666666
- type: map_at_3
value: 30.663166666666662
- type: map_at_5
value: 32.14883333333333
- type: mrr_at_1
value: 29.193833333333334
- type: mrr_at_10
value: 37.47625
- type: mrr_at_100
value: 38.3545
- type: mrr_at_1000
value: 38.413166666666676
- type: mrr_at_3
value: 35.06741666666667
- type: mrr_at_5
value: 36.450666666666656
- type: ndcg_at_1
value: 29.193833333333334
- type: ndcg_at_10
value: 38.505416666666676
- type: ndcg_at_100
value: 43.81125
- type: ndcg_at_1000
value: 46.09558333333333
- type: ndcg_at_3
value: 33.90916666666667
- type: ndcg_at_5
value: 36.07666666666666
- type: precision_at_1
value: 29.193833333333334
- type: precision_at_10
value: 6.7251666666666665
- type: precision_at_100
value: 1.1058333333333332
- type: precision_at_1000
value: 0.14833333333333332
- type: precision_at_3
value: 15.554166666666665
- type: precision_at_5
value: 11.079250000000002
- type: recall_at_1
value: 24.669249999999998
- type: recall_at_10
value: 49.75583333333332
- type: recall_at_100
value: 73.06908333333332
- type: recall_at_1000
value: 88.91316666666667
- type: recall_at_3
value: 36.913250000000005
- type: recall_at_5
value: 42.48641666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.044999999999998
- type: map_at_10
value: 30.349999999999998
- type: map_at_100
value: 31.273
- type: map_at_1000
value: 31.362000000000002
- type: map_at_3
value: 28.508
- type: map_at_5
value: 29.369
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.12
- type: mrr_at_100
value: 33.904
- type: mrr_at_1000
value: 33.967000000000006
- type: mrr_at_3
value: 31.365
- type: mrr_at_5
value: 32.124
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.214
- type: ndcg_at_100
value: 38.681
- type: ndcg_at_1000
value: 40.926
- type: ndcg_at_3
value: 30.725
- type: ndcg_at_5
value: 31.967000000000002
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.215
- type: precision_at_100
value: 0.807
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 8.712
- type: recall_at_1
value: 24.044999999999998
- type: recall_at_10
value: 43.456
- type: recall_at_100
value: 63.675000000000004
- type: recall_at_1000
value: 80.05499999999999
- type: recall_at_3
value: 33.561
- type: recall_at_5
value: 36.767
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.672
- type: map_at_10
value: 22.641
- type: map_at_100
value: 23.75
- type: map_at_1000
value: 23.877000000000002
- type: map_at_3
value: 20.219
- type: map_at_5
value: 21.648
- type: mrr_at_1
value: 18.823
- type: mrr_at_10
value: 26.101999999999997
- type: mrr_at_100
value: 27.038
- type: mrr_at_1000
value: 27.118
- type: mrr_at_3
value: 23.669
- type: mrr_at_5
value: 25.173000000000002
- type: ndcg_at_1
value: 18.823
- type: ndcg_at_10
value: 27.176000000000002
- type: ndcg_at_100
value: 32.42
- type: ndcg_at_1000
value: 35.413
- type: ndcg_at_3
value: 22.756999999999998
- type: ndcg_at_5
value: 25.032
- type: precision_at_1
value: 18.823
- type: precision_at_10
value: 5.034000000000001
- type: precision_at_100
value: 0.895
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 10.771
- type: precision_at_5
value: 8.1
- type: recall_at_1
value: 15.672
- type: recall_at_10
value: 37.296
- type: recall_at_100
value: 60.863
- type: recall_at_1000
value: 82.234
- type: recall_at_3
value: 25.330000000000002
- type: recall_at_5
value: 30.964000000000002
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.633
- type: map_at_10
value: 32.858
- type: map_at_100
value: 34.038000000000004
- type: map_at_1000
value: 34.141
- type: map_at_3
value: 30.209000000000003
- type: map_at_5
value: 31.567
- type: mrr_at_1
value: 28.358
- type: mrr_at_10
value: 36.433
- type: mrr_at_100
value: 37.352000000000004
- type: mrr_at_1000
value: 37.41
- type: mrr_at_3
value: 34.033
- type: mrr_at_5
value: 35.246
- type: ndcg_at_1
value: 28.358
- type: ndcg_at_10
value: 37.973
- type: ndcg_at_100
value: 43.411
- type: ndcg_at_1000
value: 45.747
- type: ndcg_at_3
value: 32.934999999999995
- type: ndcg_at_5
value: 35.013
- type: precision_at_1
value: 28.358
- type: precision_at_10
value: 6.418
- type: precision_at_100
value: 1.02
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 14.677000000000001
- type: precision_at_5
value: 10.335999999999999
- type: recall_at_1
value: 24.633
- type: recall_at_10
value: 50.048
- type: recall_at_100
value: 73.821
- type: recall_at_1000
value: 90.046
- type: recall_at_3
value: 36.284
- type: recall_at_5
value: 41.370000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.133
- type: map_at_10
value: 31.491999999999997
- type: map_at_100
value: 33.062000000000005
- type: map_at_1000
value: 33.256
- type: map_at_3
value: 28.886
- type: map_at_5
value: 30.262
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.144
- type: mrr_at_100
value: 37.14
- type: mrr_at_1000
value: 37.191
- type: mrr_at_3
value: 33.762
- type: mrr_at_5
value: 34.997
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 36.951
- type: ndcg_at_100
value: 43.287
- type: ndcg_at_1000
value: 45.777
- type: ndcg_at_3
value: 32.786
- type: ndcg_at_5
value: 34.65
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.055
- type: precision_at_100
value: 1.476
- type: precision_at_1000
value: 0.22899999999999998
- type: precision_at_3
value: 15.481
- type: precision_at_5
value: 11.186
- type: recall_at_1
value: 23.133
- type: recall_at_10
value: 47.285
- type: recall_at_100
value: 76.176
- type: recall_at_1000
value: 92.176
- type: recall_at_3
value: 35.223
- type: recall_at_5
value: 40.142
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.547
- type: map_at_10
value: 26.374
- type: map_at_100
value: 27.419
- type: map_at_1000
value: 27.539
- type: map_at_3
value: 23.882
- type: map_at_5
value: 25.163999999999998
- type: mrr_at_1
value: 21.442
- type: mrr_at_10
value: 28.458
- type: mrr_at_100
value: 29.360999999999997
- type: mrr_at_1000
value: 29.448999999999998
- type: mrr_at_3
value: 25.97
- type: mrr_at_5
value: 27.273999999999997
- type: ndcg_at_1
value: 21.442
- type: ndcg_at_10
value: 30.897000000000002
- type: ndcg_at_100
value: 35.99
- type: ndcg_at_1000
value: 38.832
- type: ndcg_at_3
value: 25.944
- type: ndcg_at_5
value: 28.126
- type: precision_at_1
value: 21.442
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8109999999999999
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.911
- type: recall_at_1
value: 19.547
- type: recall_at_10
value: 42.886
- type: recall_at_100
value: 66.64999999999999
- type: recall_at_1000
value: 87.368
- type: recall_at_3
value: 29.143
- type: recall_at_5
value: 34.544000000000004
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.572
- type: map_at_10
value: 25.312
- type: map_at_100
value: 27.062
- type: map_at_1000
value: 27.253
- type: map_at_3
value: 21.601
- type: map_at_5
value: 23.473
- type: mrr_at_1
value: 34.984
- type: mrr_at_10
value: 46.406
- type: mrr_at_100
value: 47.179
- type: mrr_at_1000
value: 47.21
- type: mrr_at_3
value: 43.485
- type: mrr_at_5
value: 45.322
- type: ndcg_at_1
value: 34.984
- type: ndcg_at_10
value: 34.344
- type: ndcg_at_100
value: 41.015
- type: ndcg_at_1000
value: 44.366
- type: ndcg_at_3
value: 29.119
- type: ndcg_at_5
value: 30.825999999999997
- type: precision_at_1
value: 34.984
- type: precision_at_10
value: 10.358
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 21.368000000000002
- type: precision_at_5
value: 15.948
- type: recall_at_1
value: 15.572
- type: recall_at_10
value: 39.367999999999995
- type: recall_at_100
value: 62.183
- type: recall_at_1000
value: 80.92200000000001
- type: recall_at_3
value: 26.131999999999998
- type: recall_at_5
value: 31.635999999999996
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.848
- type: map_at_10
value: 19.25
- type: map_at_100
value: 27.193
- type: map_at_1000
value: 28.721999999999998
- type: map_at_3
value: 13.968
- type: map_at_5
value: 16.283
- type: mrr_at_1
value: 68.75
- type: mrr_at_10
value: 76.25
- type: mrr_at_100
value: 76.534
- type: mrr_at_1000
value: 76.53999999999999
- type: mrr_at_3
value: 74.667
- type: mrr_at_5
value: 75.86699999999999
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 41.426
- type: ndcg_at_100
value: 45.660000000000004
- type: ndcg_at_1000
value: 53.02
- type: ndcg_at_3
value: 46.581
- type: ndcg_at_5
value: 43.836999999999996
- type: precision_at_1
value: 68.75
- type: precision_at_10
value: 32.800000000000004
- type: precision_at_100
value: 10.440000000000001
- type: precision_at_1000
value: 1.9980000000000002
- type: precision_at_3
value: 49.667
- type: precision_at_5
value: 42.25
- type: recall_at_1
value: 8.848
- type: recall_at_10
value: 24.467
- type: recall_at_100
value: 51.344
- type: recall_at_1000
value: 75.235
- type: recall_at_3
value: 15.329
- type: recall_at_5
value: 18.892999999999997
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.95
- type: f1
value: 43.44563593360779
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 78.036
- type: map_at_10
value: 85.639
- type: map_at_100
value: 85.815
- type: map_at_1000
value: 85.829
- type: map_at_3
value: 84.795
- type: map_at_5
value: 85.336
- type: mrr_at_1
value: 84.353
- type: mrr_at_10
value: 90.582
- type: mrr_at_100
value: 90.617
- type: mrr_at_1000
value: 90.617
- type: mrr_at_3
value: 90.132
- type: mrr_at_5
value: 90.447
- type: ndcg_at_1
value: 84.353
- type: ndcg_at_10
value: 89.003
- type: ndcg_at_100
value: 89.60000000000001
- type: ndcg_at_1000
value: 89.836
- type: ndcg_at_3
value: 87.81400000000001
- type: ndcg_at_5
value: 88.478
- type: precision_at_1
value: 84.353
- type: precision_at_10
value: 10.482
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 33.257999999999996
- type: precision_at_5
value: 20.465
- type: recall_at_1
value: 78.036
- type: recall_at_10
value: 94.517
- type: recall_at_100
value: 96.828
- type: recall_at_1000
value: 98.261
- type: recall_at_3
value: 91.12
- type: recall_at_5
value: 92.946
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.191
- type: map_at_10
value: 32.369
- type: map_at_100
value: 34.123999999999995
- type: map_at_1000
value: 34.317
- type: map_at_3
value: 28.71
- type: map_at_5
value: 30.607
- type: mrr_at_1
value: 40.894999999999996
- type: mrr_at_10
value: 48.842
- type: mrr_at_100
value: 49.599
- type: mrr_at_1000
value: 49.647000000000006
- type: mrr_at_3
value: 46.785
- type: mrr_at_5
value: 47.672
- type: ndcg_at_1
value: 40.894999999999996
- type: ndcg_at_10
value: 39.872
- type: ndcg_at_100
value: 46.126
- type: ndcg_at_1000
value: 49.476
- type: ndcg_at_3
value: 37.153000000000006
- type: ndcg_at_5
value: 37.433
- type: precision_at_1
value: 40.894999999999996
- type: precision_at_10
value: 10.818
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 25.051000000000002
- type: precision_at_5
value: 17.531
- type: recall_at_1
value: 20.191
- type: recall_at_10
value: 45.768
- type: recall_at_100
value: 68.82000000000001
- type: recall_at_1000
value: 89.133
- type: recall_at_3
value: 33.296
- type: recall_at_5
value: 38.022
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.257
- type: map_at_10
value: 61.467000000000006
- type: map_at_100
value: 62.364
- type: map_at_1000
value: 62.424
- type: map_at_3
value: 58.228
- type: map_at_5
value: 60.283
- type: mrr_at_1
value: 78.515
- type: mrr_at_10
value: 84.191
- type: mrr_at_100
value: 84.378
- type: mrr_at_1000
value: 84.385
- type: mrr_at_3
value: 83.284
- type: mrr_at_5
value: 83.856
- type: ndcg_at_1
value: 78.515
- type: ndcg_at_10
value: 69.78999999999999
- type: ndcg_at_100
value: 72.886
- type: ndcg_at_1000
value: 74.015
- type: ndcg_at_3
value: 65.23
- type: ndcg_at_5
value: 67.80199999999999
- type: precision_at_1
value: 78.515
- type: precision_at_10
value: 14.519000000000002
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.702
- type: precision_at_5
value: 27.046999999999997
- type: recall_at_1
value: 39.257
- type: recall_at_10
value: 72.59299999999999
- type: recall_at_100
value: 84.679
- type: recall_at_1000
value: 92.12
- type: recall_at_3
value: 62.552
- type: recall_at_5
value: 67.616
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.5152
- type: ap
value: 87.64584669595709
- type: f1
value: 91.50605576428437
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.926000000000002
- type: map_at_10
value: 34.049
- type: map_at_100
value: 35.213
- type: map_at_1000
value: 35.265
- type: map_at_3
value: 30.309
- type: map_at_5
value: 32.407000000000004
- type: mrr_at_1
value: 22.55
- type: mrr_at_10
value: 34.657
- type: mrr_at_100
value: 35.760999999999996
- type: mrr_at_1000
value: 35.807
- type: mrr_at_3
value: 30.989
- type: mrr_at_5
value: 33.039
- type: ndcg_at_1
value: 22.55
- type: ndcg_at_10
value: 40.842
- type: ndcg_at_100
value: 46.436
- type: ndcg_at_1000
value: 47.721999999999994
- type: ndcg_at_3
value: 33.209
- type: ndcg_at_5
value: 36.943
- type: precision_at_1
value: 22.55
- type: precision_at_10
value: 6.447
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.136000000000001
- type: precision_at_5
value: 10.381
- type: recall_at_1
value: 21.926000000000002
- type: recall_at_10
value: 61.724999999999994
- type: recall_at_100
value: 87.604
- type: recall_at_1000
value: 97.421
- type: recall_at_3
value: 40.944
- type: recall_at_5
value: 49.915
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.54765161878704
- type: f1
value: 93.3298945415573
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.71591427268582
- type: f1
value: 59.32113870474471
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.83053127101547
- type: f1
value: 73.60757944876475
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.72562205783457
- type: f1
value: 78.63761662505502
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.37935633767996
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.55270546130387
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.462692753143834
- type: mrr
value: 31.497569753511563
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.646
- type: map_at_10
value: 12.498
- type: map_at_100
value: 15.486
- type: map_at_1000
value: 16.805999999999997
- type: map_at_3
value: 9.325
- type: map_at_5
value: 10.751
- type: mrr_at_1
value: 43.034
- type: mrr_at_10
value: 52.662
- type: mrr_at_100
value: 53.189
- type: mrr_at_1000
value: 53.25
- type: mrr_at_3
value: 50.929
- type: mrr_at_5
value: 51.92
- type: ndcg_at_1
value: 41.796
- type: ndcg_at_10
value: 33.477000000000004
- type: ndcg_at_100
value: 29.996000000000002
- type: ndcg_at_1000
value: 38.864
- type: ndcg_at_3
value: 38.940000000000005
- type: ndcg_at_5
value: 36.689
- type: precision_at_1
value: 43.034
- type: precision_at_10
value: 24.799
- type: precision_at_100
value: 7.432999999999999
- type: precision_at_1000
value: 1.9929999999999999
- type: precision_at_3
value: 36.842000000000006
- type: precision_at_5
value: 32.135999999999996
- type: recall_at_1
value: 5.646
- type: recall_at_10
value: 15.963
- type: recall_at_100
value: 29.492
- type: recall_at_1000
value: 61.711000000000006
- type: recall_at_3
value: 10.585
- type: recall_at_5
value: 12.753999999999998
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.602
- type: map_at_10
value: 41.545
- type: map_at_100
value: 42.644999999999996
- type: map_at_1000
value: 42.685
- type: map_at_3
value: 37.261
- type: map_at_5
value: 39.706
- type: mrr_at_1
value: 31.141000000000002
- type: mrr_at_10
value: 44.139
- type: mrr_at_100
value: 44.997
- type: mrr_at_1000
value: 45.025999999999996
- type: mrr_at_3
value: 40.503
- type: mrr_at_5
value: 42.64
- type: ndcg_at_1
value: 31.141000000000002
- type: ndcg_at_10
value: 48.995
- type: ndcg_at_100
value: 53.788000000000004
- type: ndcg_at_1000
value: 54.730000000000004
- type: ndcg_at_3
value: 40.844
- type: ndcg_at_5
value: 44.955
- type: precision_at_1
value: 31.141000000000002
- type: precision_at_10
value: 8.233
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 18.579
- type: precision_at_5
value: 13.533999999999999
- type: recall_at_1
value: 27.602
- type: recall_at_10
value: 69.216
- type: recall_at_100
value: 90.252
- type: recall_at_1000
value: 97.27
- type: recall_at_3
value: 47.987
- type: recall_at_5
value: 57.438
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.949
- type: map_at_10
value: 84.89999999999999
- type: map_at_100
value: 85.531
- type: map_at_1000
value: 85.548
- type: map_at_3
value: 82.027
- type: map_at_5
value: 83.853
- type: mrr_at_1
value: 81.69999999999999
- type: mrr_at_10
value: 87.813
- type: mrr_at_100
value: 87.917
- type: mrr_at_1000
value: 87.91799999999999
- type: mrr_at_3
value: 86.938
- type: mrr_at_5
value: 87.53999999999999
- type: ndcg_at_1
value: 81.75
- type: ndcg_at_10
value: 88.55499999999999
- type: ndcg_at_100
value: 89.765
- type: ndcg_at_1000
value: 89.871
- type: ndcg_at_3
value: 85.905
- type: ndcg_at_5
value: 87.41
- type: precision_at_1
value: 81.75
- type: precision_at_10
value: 13.403
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.597
- type: precision_at_5
value: 24.69
- type: recall_at_1
value: 70.949
- type: recall_at_10
value: 95.423
- type: recall_at_100
value: 99.509
- type: recall_at_1000
value: 99.982
- type: recall_at_3
value: 87.717
- type: recall_at_5
value: 92.032
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 51.76962893449579
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.32897690686379
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.478
- type: map_at_10
value: 11.994
- type: map_at_100
value: 13.977
- type: map_at_1000
value: 14.295
- type: map_at_3
value: 8.408999999999999
- type: map_at_5
value: 10.024
- type: mrr_at_1
value: 22.1
- type: mrr_at_10
value: 33.526
- type: mrr_at_100
value: 34.577000000000005
- type: mrr_at_1000
value: 34.632000000000005
- type: mrr_at_3
value: 30.217
- type: mrr_at_5
value: 31.962000000000003
- type: ndcg_at_1
value: 22.1
- type: ndcg_at_10
value: 20.191
- type: ndcg_at_100
value: 27.954
- type: ndcg_at_1000
value: 33.491
- type: ndcg_at_3
value: 18.787000000000003
- type: ndcg_at_5
value: 16.378999999999998
- type: precision_at_1
value: 22.1
- type: precision_at_10
value: 10.69
- type: precision_at_100
value: 2.1919999999999997
- type: precision_at_1000
value: 0.35200000000000004
- type: precision_at_3
value: 17.732999999999997
- type: precision_at_5
value: 14.499999999999998
- type: recall_at_1
value: 4.478
- type: recall_at_10
value: 21.657
- type: recall_at_100
value: 44.54
- type: recall_at_1000
value: 71.542
- type: recall_at_3
value: 10.778
- type: recall_at_5
value: 14.687
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.82325259156718
- type: cos_sim_spearman
value: 79.2463589100662
- type: euclidean_pearson
value: 80.48318380496771
- type: euclidean_spearman
value: 79.34451935199979
- type: manhattan_pearson
value: 80.39041824178759
- type: manhattan_spearman
value: 79.23002892700211
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.74130231431258
- type: cos_sim_spearman
value: 78.36856568042397
- type: euclidean_pearson
value: 82.48301631890303
- type: euclidean_spearman
value: 78.28376980722732
- type: manhattan_pearson
value: 82.43552075450525
- type: manhattan_spearman
value: 78.22702443947126
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.96138619461459
- type: cos_sim_spearman
value: 81.85436343502379
- type: euclidean_pearson
value: 81.82895226665367
- type: euclidean_spearman
value: 82.22707349602916
- type: manhattan_pearson
value: 81.66303369445873
- type: manhattan_spearman
value: 82.05030197179455
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.05481244198648
- type: cos_sim_spearman
value: 80.85052504637808
- type: euclidean_pearson
value: 80.86728419744497
- type: euclidean_spearman
value: 81.033786401512
- type: manhattan_pearson
value: 80.90107531061103
- type: manhattan_spearman
value: 81.11374116827795
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.615220756399
- type: cos_sim_spearman
value: 86.46858500002092
- type: euclidean_pearson
value: 86.08307800247586
- type: euclidean_spearman
value: 86.72691443870013
- type: manhattan_pearson
value: 85.96155594487269
- type: manhattan_spearman
value: 86.605909505275
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.14363913634436
- type: cos_sim_spearman
value: 84.48430226487102
- type: euclidean_pearson
value: 83.75303424801902
- type: euclidean_spearman
value: 84.56762380734538
- type: manhattan_pearson
value: 83.6135447165928
- type: manhattan_spearman
value: 84.39898212616731
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.09909252554525
- type: cos_sim_spearman
value: 85.70951402743276
- type: euclidean_pearson
value: 87.1991936239908
- type: euclidean_spearman
value: 86.07745840612071
- type: manhattan_pearson
value: 87.25039137549952
- type: manhattan_spearman
value: 85.99938746659761
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.529332093413615
- type: cos_sim_spearman
value: 65.38177340147439
- type: euclidean_pearson
value: 66.35278011412136
- type: euclidean_spearman
value: 65.47147267032997
- type: manhattan_pearson
value: 66.71804682408693
- type: manhattan_spearman
value: 65.67406521423597
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.45802942885662
- type: cos_sim_spearman
value: 84.8853341842566
- type: euclidean_pearson
value: 84.60915021096707
- type: euclidean_spearman
value: 85.11181242913666
- type: manhattan_pearson
value: 84.38600521210364
- type: manhattan_spearman
value: 84.89045417981723
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.92793380635129
- type: mrr
value: 95.85834191226348
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.74400000000001
- type: map_at_10
value: 65.455
- type: map_at_100
value: 66.106
- type: map_at_1000
value: 66.129
- type: map_at_3
value: 62.719
- type: map_at_5
value: 64.441
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 66.776
- type: mrr_at_100
value: 67.363
- type: mrr_at_1000
value: 67.384
- type: mrr_at_3
value: 64.889
- type: mrr_at_5
value: 66.122
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 69.904
- type: ndcg_at_100
value: 72.807
- type: ndcg_at_1000
value: 73.423
- type: ndcg_at_3
value: 65.405
- type: ndcg_at_5
value: 67.86999999999999
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 17
- type: recall_at_1
value: 55.74400000000001
- type: recall_at_10
value: 82.122
- type: recall_at_100
value: 95.167
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 70.14399999999999
- type: recall_at_5
value: 76.417
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86534653465347
- type: cos_sim_ap
value: 96.54142419791388
- type: cos_sim_f1
value: 93.07535641547861
- type: cos_sim_precision
value: 94.81327800829875
- type: cos_sim_recall
value: 91.4
- type: dot_accuracy
value: 99.86435643564356
- type: dot_ap
value: 96.53682260449868
- type: dot_f1
value: 92.98515104966718
- type: dot_precision
value: 95.27806925498426
- type: dot_recall
value: 90.8
- type: euclidean_accuracy
value: 99.86336633663366
- type: euclidean_ap
value: 96.5228676185697
- type: euclidean_f1
value: 92.9735234215886
- type: euclidean_precision
value: 94.70954356846472
- type: euclidean_recall
value: 91.3
- type: manhattan_accuracy
value: 99.85841584158416
- type: manhattan_ap
value: 96.50392760934032
- type: manhattan_f1
value: 92.84642321160581
- type: manhattan_precision
value: 92.8928928928929
- type: manhattan_recall
value: 92.80000000000001
- type: max_accuracy
value: 99.86534653465347
- type: max_ap
value: 96.54142419791388
- type: max_f1
value: 93.07535641547861
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 61.08285408766616
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.640675309010604
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.20333913710715
- type: mrr
value: 54.088813555725324
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.79465221925075
- type: cos_sim_spearman
value: 30.530816059163634
- type: dot_pearson
value: 31.364837244718043
- type: dot_spearman
value: 30.79726823684003
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.735
- type: map_at_100
value: 8.978
- type: map_at_1000
value: 20.851
- type: map_at_3
value: 0.613
- type: map_at_5
value: 0.964
- type: mrr_at_1
value: 88
- type: mrr_at_10
value: 92.867
- type: mrr_at_100
value: 92.867
- type: mrr_at_1000
value: 92.867
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 92.667
- type: ndcg_at_1
value: 82
- type: ndcg_at_10
value: 73.164
- type: ndcg_at_100
value: 51.878
- type: ndcg_at_1000
value: 44.864
- type: ndcg_at_3
value: 79.184
- type: ndcg_at_5
value: 76.39
- type: precision_at_1
value: 88
- type: precision_at_10
value: 76.2
- type: precision_at_100
value: 52.459999999999994
- type: precision_at_1000
value: 19.692
- type: precision_at_3
value: 82.667
- type: precision_at_5
value: 80
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 1.942
- type: recall_at_100
value: 12.342
- type: recall_at_1000
value: 41.42
- type: recall_at_3
value: 0.637
- type: recall_at_5
value: 1.034
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.567
- type: map_at_10
value: 13.116
- type: map_at_100
value: 19.39
- type: map_at_1000
value: 20.988
- type: map_at_3
value: 7.109
- type: map_at_5
value: 9.950000000000001
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 57.404999999999994
- type: mrr_at_100
value: 58.021
- type: mrr_at_1000
value: 58.021
- type: mrr_at_3
value: 54.762
- type: mrr_at_5
value: 56.19
- type: ndcg_at_1
value: 38.775999999999996
- type: ndcg_at_10
value: 30.359
- type: ndcg_at_100
value: 41.284
- type: ndcg_at_1000
value: 52.30200000000001
- type: ndcg_at_3
value: 36.744
- type: ndcg_at_5
value: 34.326
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 26.122
- type: precision_at_100
value: 8.082
- type: precision_at_1000
value: 1.559
- type: precision_at_3
value: 40.136
- type: precision_at_5
value: 35.510000000000005
- type: recall_at_1
value: 3.567
- type: recall_at_10
value: 19.045
- type: recall_at_100
value: 49.979
- type: recall_at_1000
value: 84.206
- type: recall_at_3
value: 8.52
- type: recall_at_5
value: 13.103000000000002
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 68.8394
- type: ap
value: 13.454399712443099
- type: f1
value: 53.04963076364322
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.546123372948514
- type: f1
value: 60.86952793277713
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.10042955060234
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.03308100375514
- type: cos_sim_ap
value: 71.08284605869684
- type: cos_sim_f1
value: 65.42539436255494
- type: cos_sim_precision
value: 64.14807302231237
- type: cos_sim_recall
value: 66.75461741424802
- type: dot_accuracy
value: 84.68736961316088
- type: dot_ap
value: 69.20524036530992
- type: dot_f1
value: 63.54893953365829
- type: dot_precision
value: 63.45698500394633
- type: dot_recall
value: 63.641160949868066
- type: euclidean_accuracy
value: 85.07480479227513
- type: euclidean_ap
value: 71.14592761009864
- type: euclidean_f1
value: 65.43814432989691
- type: euclidean_precision
value: 63.95465994962216
- type: euclidean_recall
value: 66.99208443271768
- type: manhattan_accuracy
value: 85.06288370984085
- type: manhattan_ap
value: 71.07289742593868
- type: manhattan_f1
value: 65.37585421412301
- type: manhattan_precision
value: 62.816147859922175
- type: manhattan_recall
value: 68.15303430079156
- type: max_accuracy
value: 85.07480479227513
- type: max_ap
value: 71.14592761009864
- type: max_f1
value: 65.43814432989691
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.79058485659952
- type: cos_sim_ap
value: 83.7183187008759
- type: cos_sim_f1
value: 75.86921142180798
- type: cos_sim_precision
value: 73.00683371298405
- type: cos_sim_recall
value: 78.96519864490298
- type: dot_accuracy
value: 87.0085768618776
- type: dot_ap
value: 81.87467488474279
- type: dot_f1
value: 74.04188363990559
- type: dot_precision
value: 72.10507114191901
- type: dot_recall
value: 76.08561749307053
- type: euclidean_accuracy
value: 87.8332751193387
- type: euclidean_ap
value: 83.83585648120315
- type: euclidean_f1
value: 76.02582177042369
- type: euclidean_precision
value: 73.36388371759989
- type: euclidean_recall
value: 78.88820449645827
- type: manhattan_accuracy
value: 87.87208444910156
- type: manhattan_ap
value: 83.8101950642973
- type: manhattan_f1
value: 75.90454195535027
- type: manhattan_precision
value: 72.44419564761039
- type: manhattan_recall
value: 79.71204188481676
- type: max_accuracy
value: 87.87208444910156
- type: max_ap
value: 83.83585648120315
- type: max_f1
value: 76.02582177042369
license: mit
language:
- en
---
**We recommend switching to the newest [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5), which has a more reasonable similarity distribution and the same method of usage.**
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our GitHub: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 09/15/2023: Release [paper](https://arxiv.org/pdf/2309.07597.pdf) and [dataset](https://data.baai.ac.cn/details/BAAI-MTP).
- 09/12/2023: New Release:
- **New reranker models**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding models**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction.
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval\* |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
\*: If you need to search for relevant passages to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
\**: Different from the embedding model, the reranker uses the question and document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
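Below is a minimal sketch of this retrieve-then-rerank pipeline. It assumes a toy in-memory corpus; the corpus, query, and candidate count are placeholders, and the model names follow the Model List above.
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["The giant panda is a bear species endemic to China.",
          "Paris is the capital of France.",
          "Pandas mainly eat bamboo."]
query = "what is panda?"

# Step 1: retrieve candidates with the bi-encoder (embedding model).
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
retrieval_scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-retrieval_scores)[:100]  # keep up to the top-100 candidates

# Step 2: re-rank the candidates with the cross-encoder and keep the top 3.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in candidates])
top_3 = [corpus[candidates[i]] for i in np.argsort(-np.array(rerank_scores))[:3]]
print(top_3)
```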
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use or fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
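For illustration, here is a minimal sketch of threshold-based filtering; the threshold and sentences are placeholders, and the threshold should be chosen from the distribution on your own data. FlagModel returns normalized embeddings by default, so the dot product acts as cosine similarity.
```python
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5')
sentences_a = ["样例数据-1"]
sentences_b = ["样例数据-2", "样例数据-3"]
threshold = 0.85  # placeholder; tune on your own data

similarity = model.encode(sentences_a) @ model.encode(sentences_b).T
similar_pairs = [(a, b)
                 for i, a in enumerate(sentences_a)
                 for j, b in enumerate(sentences_b)
                 if similarity[i, j] > threshold]
print(similar_pairs)
```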
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query
# corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
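For example (set the variable before creating the model; the device indices below are placeholders):
```python
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # use only GPU 0 and GPU 1
# os.environ["CUDA_VISIBLE_DEVICES"] = ""   # make all GPUs unavailable (CPU only)

from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
```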
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
But the instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from the embedding model, the reranker uses the question and document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by inputting a query and passage to the reranker.
The reranker is optimized based on cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
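The returned scores are raw logits rather than probabilities. If you need values in \[0, 1\], one common option (an assumption, not part of the official API) is to apply a sigmoid:
```python
import torch
from FlagEmbedding import FlagReranker

reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
scores = reranker.compute_score([['what is panda?', 'hi'],
                                 ['what is panda?', 'The giant panda is a bear species endemic to China.']])
normalized_scores = torch.sigmoid(torch.tensor(scores))  # squashes raw logits into (0, 1)
print(normalized_scores)
```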
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!**
For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the benchmark C-MTEB for Chinese text embeddings, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details on bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao (stxiao@baai.ac.cn) and Zheng Liu (liuzheng@baai.ac.cn).
## Citation
If you find our work helpful, please cite us:
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
EnDevSols/falcon-7b-chat-icd
|
EnDevSols
| 2023-09-20T17:55:34Z | 5 | 1 |
peft
|
[
"peft",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2023-07-09T07:12:19Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
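For reference, here is a sketch of an equivalent `transformers` quantization setup; it assumes the standard `BitsAndBytesConfig` API and simply mirrors the values listed above, loading the base model named in the card metadata.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
```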
### Framework versions
- PEFT 0.4.0.dev0
|
setty45/EfficientNet
|
setty45
| 2023-09-20T17:44:55Z | 0 | 0 | null |
[
"image-classification",
"license:afl-3.0",
"region:us"
] |
image-classification
| 2023-09-20T17:44:04Z |
---
license: afl-3.0
pipeline_tag: image-classification
---
|
Puzer/gte_large__pnp_loss
|
Puzer
| 2023-09-20T17:42:43Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T17:41:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
nhmnhat1997/condenser-phobert-biencoder
|
nhmnhat1997
| 2023-09-20T17:22:09Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-09-20T17:08:58Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
conorhastings/groupchat
|
conorhastings
| 2023-09-20T17:20:23Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-21T18:39:47Z |
---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
mythrex/dqn-SpaceInvadersNoFrameskip-v4
|
mythrex
| 2023-09-20T17:01:53Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T17:01:24Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mythrex -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mythrex -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mythrex
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
lizhouf/mpox_50_topics_modeling
|
lizhouf
| 2023-09-20T16:53:10Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-09-20T16:45:30Z |
---
license: cc-by-nc-nd-4.0
---
This is the topic modeling model trained for the paper Characterizing Topical and Network Dynamics of Online Toxicity During the 2022 Mpox Outbreak.
To load the model, please refer to [BERTopic guidelines](https://maartengr.github.io/BERTopic/index.html).
In particular, you can use:
```python
from bertopic import BERTopic

topic_model = BERTopic.load("Monkeypox_tweets_toxic_50_v5")
```
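Once loaded, the topics can be inspected with the usual BERTopic calls, for example (a minimal sketch assuming the standard BERTopic API):
```python
# Overview of all topics: id, size, and representative terms
print(topic_model.get_topic_info())

# Top words and weights for a single topic, e.g. topic 0
print(topic_model.get_topic(0))
```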
|
Shocker444/ppo-Huggy
|
Shocker444
| 2023-09-20T16:42:32Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-20T16:42:26Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Shocker444/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LarryAIDraw/beatrixamerhauserv3-lora-nochekaiser
|
LarryAIDraw
| 2023-09-20T16:40:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-20T16:33:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/110331/beatrix-amerhauser-bucket-list-of-the-dead
|
LarryAIDraw/nakano_itsuki_v1
|
LarryAIDraw
| 2023-09-20T16:40:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-20T16:33:07Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/148571/nakano-itsuki-5-toubun-no-hanayome
|
ohtaman/falcon-7b-kokkai2022-lora
|
ohtaman
| 2023-09-20T16:36:19Z | 12 | 0 |
peft
|
[
"peft",
"text-generation",
"ja",
"dataset:ohtaman/kokkai2022",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-14T20:05:55Z |
---
language:
- ja
license: apache-2.0
library_name: peft
datasets:
- ohtaman/kokkai2022
pipeline_tag: text-generation
inference: false
base_model: tiiuae/falcon-7b
---
<a href="https://colab.research.google.com/drive/1oWHM5_DbltvrD27oZL4-fumXChkMkrC5?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab">
</a>
This model learned the proceedings of the Japanese parliament in 2022.
The [dataset](https://huggingface.co/datasets/ohtaman/kokkai2022) was collected using
[National Diet Library's Search API](https://kokkai.ndl.go.jp/api.html).
This model was built for a hackathon event, [第1回大規模言語モデル分散学習ハッカソン](https://abci.ai/event/2023/06/13/ja_event.html) (the 1st Large Language Model Distributed Training Hackathon) ([#ABCILLM](https://twitter.com/hashtag/ABCILLM)), as an example of training that uses multiple GPUs or multiple nodes.
An example input is as follows:
```
# question
麻生太郎
増税が必要とお考えでしょうか?
# answer
鈴木 俊一
```
and the response is:
```
「財政民主主義」のためには、国庫負担を引き下げるならば、企業の賃上げを実現するためにも、消費者物価の高騰対策等を含めて、経済対策を行い、成長と分配の好循環を実珉化することが重要でございます。
今回の補正予算案においては、新型コロナウイルス感染症対応事業助成金及び電気料金支援交付金の拡充を盛り込んでおります。それ以外にも、令和四年度一般会計予備費二千五百億円を活用し、政府全体の方針を踏まえ、各種制度改革や人員削減等を通じて、様々な取組を進めてまいりますが、具体的な内容は決定しておりず、その際には、財源不足に関する議論を重ねていく中で検討してまいる所存です。
次に、賃上賃下の問題について申し上げます。
私自身は、賃金水準の向上を目指すことは大変重要であり、
```
## Training procedure
Fine-tune [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) with the [ohtaman/kokkai2022](https://huggingface.co/datasets/ohtaman/kokkai2022) dataset (currently private) using LoRA with the following configurations. Training took 2:34:32 on 16 V-nodes (V100 x 64).
|param|value|
|:--:|:--:|
|r| 4|
|lora_alpha| 2|
|target_modules|query_key_value<br>dense<br>dense_h_to_4h<br>dense_4h_to_h|
|lora_dropout| 0.01|
|bias| None|
|task_type| CAUSAL_LM|
|optimizer|AdamW|
|lr|4e-4|
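For illustration, the table roughly corresponds to the following `peft` configuration (a sketch only; the optimizer and learning rate are configured separately in the training loop, and `bias="none"` is the string form `peft` expects):
```python
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=4,
    lora_alpha=2,
    target_modules=["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"],
    lora_dropout=0.01,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)
```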
The prompt template is as follows:
```
# question
{questioner}
{question_text}
# answer
{answerer}
{answer_text}
```
### Example Code
You can try the model with [Colaboratory](https://colab.research.google.com/drive/1oWHM5_DbltvrD27oZL4-fumXChkMkrC5?usp=sharing) .
No Pro or Pro+ is needed.
The typical code to generate texts with this model is as follows:
```python
import peft
import torch
import transformers

base_model_name = "tiiuae/falcon-7b"
peft_model_name = "ohtaman/falcon-7b-kokkai2022-lora"
max_length = 256  # maximum generation length; adjust as needed

tokenizer = transformers.AutoTokenizer.from_pretrained(base_model_name, trust_remote_code=True)
base_model = transformers.AutoModelForCausalLM.from_pretrained(base_model_name, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
peft_model = peft.PeftModelForCausalLM.from_pretrained(base_model, peft_model_name, torch_dtype=torch.bfloat16)
prompt = "# question\n麻生太郎\n\n増税が必要とお考えでしょうか?\n# answer\n鈴木 俊一\n\n"
input_tokens = tokenizer(prompt, return_tensors="pt").to(peft_model.device)
input_length = input_tokens.input_ids.shape[1]
with torch.no_grad():
outputs = peft_model.generate(
input_ids=input_tokens["input_ids"],
attention_mask=input_tokens["attention_mask"],
return_dict_in_generate=True,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
max_length=max_length,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
)
output_tokens = outputs.sequences[0, input_length:-1]
print(tokenizer.decode(output_tokens))
```
|
flozi00/Llama-2-13b-german-assistant-v7
|
flozi00
| 2023-09-20T16:32:30Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"de",
"dataset:flozi00/conversations",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-08T16:18:17Z |
---
datasets:
- flozi00/conversations
language:
- de
---
## This project is sponsored by [  ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
# Model Card
This model is a fine-tuned version for German instructions and conversations in the style of Alpaca, using the markers "### User:" and "### Assistant:".
The dataset used is deduplicated and cleaned, with no code inside. The focus is on instruction following and conversational tasks.
The model architecture is based on Llama version 2 with 13B parameters, trained on 100% renewable-energy-powered hardware.
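A minimal usage sketch follows; the exact prompt formatting (newlines around the markers) and the generation parameters are assumptions based on the "### User:" / "### Assistant:" format described above, not taken from the training setup.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flozi00/Llama-2-13b-german-assistant-v7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)

# Prompt format assumed from the markers described in the card.
prompt = "### User: Was ist die Hauptstadt von Deutschland?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```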
This work is contributed by private research of [flozi00](https://huggingface.co/flozi00)
Join discussions about German LLM research, and plan larger training runs together: https://join.slack.com/t/slack-dtc7771/shared_invite/zt-219keplqu-hLwjm0xcFAOX7enERfBz0Q
|
linoyts/lora-xl-linoy_face-1.0-1.0-1000-1-32-prodigy
|
linoyts
| 2023-09-20T16:21:25Z | 6 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-20T15:58:19Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a TOK woman
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - LinoyTsaban/lora-xl-linoy_face-1.0-1.0-1000-1-32-prodigy
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "photo of a TOK woman" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
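A minimal inference sketch, assuming the standard diffusers LoRA-loading API; the step count and output path are placeholders, and the fp16-fix VAE matches the one used for training.
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("LinoyTsaban/lora-xl-linoy_face-1.0-1.0-1000-1-32-prodigy")

image = pipe("photo of a TOK woman", num_inference_steps=30).images[0]
image.save("linoy_face.png")
```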
|
jondurbin/airoboros-l2-70b-2.2.1-peft
|
jondurbin
| 2023-09-20T16:19:32Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-09-20T16:13:56Z |
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
|
Sanjay1234/Trillo-Project-1
|
Sanjay1234
| 2023-09-20T16:18:16Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-20T16:17:51Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Sanjay1234/Trillo-Project-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Sanjay1234/Trillo-Project-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
ShaunThayil/training-7
|
ShaunThayil
| 2023-09-20T16:15:01Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-20T16:14:11Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: training-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training-7
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Accuracy: 0.9878
- Precision: 0.9909
- Recall: 0.9842
- F1: 0.9875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0.5 | 131 | 0.0592 | 0.9856 | 0.9954 | 0.9752 | 0.9852 |
| No log | 1.0 | 262 | 0.0695 | 0.9789 | 0.9977 | 0.9594 | 0.9781 |
| 0.1477 | 1.49 | 393 | 0.0648 | 0.9822 | 0.9977 | 0.9661 | 0.9817 |
| 0.1477 | 1.99 | 524 | 0.0657 | 0.9833 | 0.9954 | 0.9707 | 0.9829 |
| 0.0555 | 2.49 | 655 | 0.0611 | 0.9856 | 0.9954 | 0.9752 | 0.9852 |
| 0.0555 | 2.99 | 786 | 0.0599 | 0.9889 | 0.9932 | 0.9842 | 0.9887 |
| 0.0243 | 3.49 | 917 | 0.0574 | 0.9878 | 0.9909 | 0.9842 | 0.9875 |
| 0.0243 | 3.98 | 1048 | 0.0602 | 0.9878 | 0.9909 | 0.9842 | 0.9875 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230913+cu121
- Tokenizers 0.13.3
|
Gaayu/my_awesome_model
|
Gaayu
| 2023-09-20T16:07:05Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-20T12:49:27Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Gaayu/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Gaayu/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2493
- Validation Loss: 0.2259
- Train Accuracy: 0.9100
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 390, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2493 | 0.2259 | 0.9100 | 0 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bedus-creation/eng-limbu-t5-manual-001
|
bedus-creation
| 2023-09-20T16:03:22Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-20T15:32:26Z |
---
license: apache-2.0
base_model: bedus-creation/eng-limbu-t5-all-001
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/eng-limbu-t5-manual-001
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/eng-limbu-t5-manual-001
This model is a fine-tuned version of [bedus-creation/eng-limbu-t5-all-001](https://huggingface.co/bedus-creation/eng-limbu-t5-all-001) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8893
- Validation Loss: 3.6600
- Epoch: 69
## Model description
More information needed
## Intended uses & limitations
More information needed
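In the absence of further documentation, a minimal English-to-Limbu generation sketch might look like the following; whether the model expects a task prefix or any particular input format is an assumption:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "bedus-creation/eng-limbu-t5-manual-001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Good morning.", return_tensors="tf")  # plain English input is an assumption
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```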
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2750 | 4.2322 | 0 |
| 4.1667 | 4.1503 | 1 |
| 4.0966 | 4.0952 | 2 |
| 4.0536 | 4.0500 | 3 |
| 3.9914 | 4.0121 | 4 |
| 3.9422 | 3.9765 | 5 |
| 3.9123 | 3.9451 | 6 |
| 3.8867 | 3.9304 | 7 |
| 3.8378 | 3.9014 | 8 |
| 3.8279 | 3.8862 | 9 |
| 3.7932 | 3.8835 | 10 |
| 3.7725 | 3.8579 | 11 |
| 3.7653 | 3.8483 | 12 |
| 3.7401 | 3.8159 | 13 |
| 3.7069 | 3.8228 | 14 |
| 3.7007 | 3.7981 | 15 |
| 3.6797 | 3.7953 | 16 |
| 3.6476 | 3.7833 | 17 |
| 3.6299 | 3.7847 | 18 |
| 3.6046 | 3.7627 | 19 |
| 3.5917 | 3.7639 | 20 |
| 3.5799 | 3.7540 | 21 |
| 3.5757 | 3.7310 | 22 |
| 3.5402 | 3.7316 | 23 |
| 3.5430 | 3.7213 | 24 |
| 3.5086 | 3.7202 | 25 |
| 3.4939 | 3.7163 | 26 |
| 3.4725 | 3.6984 | 27 |
| 3.4554 | 3.6964 | 28 |
| 3.4278 | 3.6964 | 29 |
| 3.4357 | 3.6970 | 30 |
| 3.4297 | 3.6938 | 31 |
| 3.4024 | 3.6820 | 32 |
| 3.3928 | 3.6600 | 33 |
| 3.3757 | 3.6642 | 34 |
| 3.3640 | 3.6555 | 35 |
| 3.3264 | 3.6627 | 36 |
| 3.3270 | 3.6347 | 37 |
| 3.3104 | 3.6260 | 38 |
| 3.2856 | 3.6419 | 39 |
| 3.2632 | 3.6561 | 40 |
| 3.2600 | 3.6350 | 41 |
| 3.2450 | 3.6322 | 42 |
| 3.2248 | 3.6355 | 43 |
| 3.2071 | 3.6192 | 44 |
| 3.1965 | 3.6300 | 45 |
| 3.1809 | 3.6332 | 46 |
| 3.1697 | 3.6217 | 47 |
| 3.1591 | 3.6306 | 48 |
| 3.1451 | 3.6444 | 49 |
| 3.1168 | 3.6353 | 50 |
| 3.0928 | 3.6329 | 51 |
| 3.1097 | 3.6163 | 52 |
| 3.0847 | 3.6268 | 53 |
| 3.0832 | 3.6534 | 54 |
| 3.0712 | 3.6443 | 55 |
| 3.0607 | 3.6229 | 56 |
| 3.0110 | 3.6439 | 57 |
| 3.0208 | 3.6574 | 58 |
| 3.0153 | 3.6063 | 59 |
| 2.9872 | 3.6301 | 60 |
| 2.9894 | 3.6558 | 61 |
| 2.9745 | 3.6310 | 62 |
| 2.9629 | 3.6169 | 63 |
| 2.9564 | 3.6445 | 64 |
| 2.9207 | 3.6498 | 65 |
| 2.9216 | 3.6453 | 66 |
| 2.9199 | 3.6353 | 67 |
| 2.8910 | 3.6616 | 68 |
| 2.8893 | 3.6600 | 69 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
guydebruyn/ppo-CartPole-v2
|
guydebruyn
| 2023-09-20T16:03:06Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T16:03:01Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -165.83 +/- 64.75
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'guydebruyn/ppo-CartPole-v2'
'batch_size': 512
'minibatch_size': 128}
```
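Note that `batch_size` and `minibatch_size` above are derived from the other settings rather than chosen independently; a quick sanity check:

```python
num_envs, num_steps, num_minibatches, total_timesteps = 4, 128, 4, 50000

batch_size = num_envs * num_steps                # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches   # 512 // 4 = 128
num_updates = total_timesteps // batch_size      # 50000 // 512 = 97 rollout/update cycles
print(batch_size, minibatch_size, num_updates)
```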
|
linoyts/lora-xl-linoy_face-0.0001-5e-05-1000-1-32
|
linoyts
| 2023-09-20T15:49:28Z | 4 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-20T05:39:45Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a TOK woman
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - LinoyTsaban/lora-xl-linoy_face-0.0001-5e-05-1000-1-32
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "photo of a TOK woman" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
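A minimal diffusers inference sketch is given below. It assumes the repository id in the heading above and that the LoRA weights resolve under the default file name; adjust if your checkout differs.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repository (repo id assumed from the heading).
pipe.load_lora_weights("LinoyTsaban/lora-xl-linoy_face-0.0001-5e-05-1000-1-32")

image = pipe("photo of a TOK woman", num_inference_steps=30).images[0]
image.save("linoy_face.png")
```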
|
MohanaPriyaa/image_classification
|
MohanaPriyaa
| 2023-09-20T15:43:27Z | 63 | 1 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-20T14:06:18Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: MohanaPriyaa/image_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MohanaPriyaa/image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2925
- Validation Loss: 0.2284
- Train Accuracy: 0.909
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
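As a rough usage sketch (the repository id is assumed from this card, and loading the TF weights through the pipeline API is assumed to work):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="MohanaPriyaa/image_classification")
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")  # any image URL or local path
print(preds[:3])
```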
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2925 | 0.2284 | 0.909 | 0 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
BramVanroy/mbart-large-cc25-ft-amr30-es
|
BramVanroy
| 2023-09-20T15:34:44Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-20T15:33:05Z |
---
base_model: facebook/mbart-large-cc25
tags:
- generated_from_trainer
model-index:
- name: es+no_processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# es+no_processing
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5874
- Smatch Precision: 74.08
- Smatch Recall: 76.84
- Smatch Fscore: 75.44
- Smatch Unparsable: 0
- Percent Not Recoverable: 0.2323
## Model description
More information needed
## Intended uses & limitations
More information needed
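For orientation only, a generation sketch with this checkpoint is shown below; the exact input conventions (source-language code, any preprocessing) and the linearised-AMR output format are assumptions, not documented behaviour:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "BramVanroy/mbart-large-cc25-ft-amr30-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Spanish sentence in, (presumably) linearised AMR graph out.
inputs = tokenizer("El niño quiere ir a Nueva York.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```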
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Smatch Precision | Smatch Recall | Smatch Fscore | Smatch Unparsable | Percent Not Recoverable |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:-------------:|:-----------------:|:-----------------------:|
| 0.3908 | 1.0 | 3477 | 1.4300 | 19.74 | 68.95 | 30.7 | 0 | 0.0 |
| 0.256 | 2.0 | 6954 | 0.8998 | 27.75 | 70.61 | 39.85 | 1 | 0.0581 |
| 0.0704 | 3.0 | 10431 | 0.8727 | 30.09 | 72.2 | 42.47 | 0 | 0.1161 |
| 0.0586 | 4.0 | 13908 | 0.7774 | 37.1 | 74.93 | 49.62 | 0 | 0.1161 |
| 0.1059 | 5.0 | 17385 | 0.6322 | 42.52 | 74.54 | 54.15 | 1 | 0.1161 |
| 0.0424 | 6.0 | 20862 | 0.6090 | 47.13 | 76.21 | 58.25 | 0 | 0.0 |
| 0.0139 | 7.0 | 24339 | 0.5768 | 48.3 | 77.31 | 59.46 | 0 | 0.0581 |
| 0.08 | 8.0 | 27817 | 0.5608 | 55.74 | 77.16 | 64.72 | 0 | 0.1161 |
| 0.0224 | 9.0 | 31294 | 0.5937 | 54.91 | 77.02 | 64.11 | 0 | 0.0581 |
| 0.0757 | 10.0 | 34771 | 0.5588 | 59.53 | 77.47 | 67.32 | 0 | 0.0581 |
| 0.0613 | 11.0 | 38248 | 0.5894 | 60.83 | 77.82 | 68.28 | 0 | 0.0581 |
| 0.1045 | 12.0 | 41725 | 0.5847 | 61.23 | 77.17 | 68.28 | 0 | 0.1742 |
| 0.012 | 13.0 | 45202 | 0.5588 | 65.61 | 77.47 | 71.05 | 0 | 0.0 |
| 0.0591 | 14.0 | 48679 | 0.5609 | 66.51 | 77.86 | 71.74 | 0 | 0.0581 |
| 0.0252 | 15.0 | 52156 | 0.5653 | 67.48 | 77.75 | 72.25 | 0 | 0.0 |
| 0.0129 | 16.0 | 55634 | 0.5602 | 68.92 | 77.57 | 72.99 | 0 | 0.0 |
| 0.0006 | 17.0 | 59111 | 0.5876 | 68.57 | 77.81 | 72.9 | 0 | 0.1742 |
| 0.0182 | 18.0 | 62588 | 0.5951 | 68.97 | 77.96 | 73.19 | 0 | 0.1161 |
| 0.018 | 19.0 | 66065 | 0.5865 | 70.63 | 77.68 | 73.98 | 0 | 0.0581 |
| 0.0097 | 20.0 | 69542 | 0.6073 | 71.68 | 77.38 | 74.42 | 0 | 0.1161 |
| 0.0021 | 21.0 | 73019 | 0.5984 | 72.25 | 77.92 | 74.98 | 0 | 0.0581 |
| 0.0371 | 22.0 | 76496 | 0.5907 | 72.92 | 77.59 | 75.18 | 0 | 0.1742 |
| 0.0382 | 23.0 | 79973 | 0.5928 | 73.06 | 77.49 | 75.21 | 0 | 0.1742 |
| 0.0148 | 24.0 | 83451 | 0.5903 | 73.98 | 77.15 | 75.53 | 0 | 0.0581 |
| 0.1326 | 25.0 | 86925 | 0.5874 | 74.08 | 76.84 | 75.44 | 0 | 0.2323 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
Sudhee1997/Llama-2-7b-Custom-Recruit
|
Sudhee1997
| 2023-09-20T15:33:14Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-15T11:25:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
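Equivalently, the same quantization settings can be expressed as a `BitsAndBytesConfig` when reloading a base model to attach this adapter to; the base checkpoint name below is an assumption inferred from the repository name:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# Base checkpoint assumed (and gated on the Hub); replace it if the adapter was trained on another model.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Sudhee1997/Llama-2-7b-Custom-Recruit")
```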
### Framework versions
- PEFT 0.4.0
|
SakataHalmi/q-FrozenLake-v1-4x4-noSlippery
|
SakataHalmi
| 2023-09-20T15:30:22Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T15:30:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="SakataHalmi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
csakarwa/Model1cs
|
csakarwa
| 2023-09-20T15:24:47Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T15:24:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
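A corresponding loading sketch for these 8-bit settings; the card does not name the base model, so the checkpoint id below is purely a placeholder:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

# "base-model-id" is a placeholder: the card does not state which model this adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "csakarwa/Model1cs")
```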
### Framework versions
- PEFT 0.6.0.dev0
|
Gurusha/dreambooth_holding_phone_2
|
Gurusha
| 2023-09-20T15:24:14Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-20T14:30:04Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a human sks hand holding a phone in the hand
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Gurusha/dreambooth_holding_phone_2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a human sks hand holding a phone in the hand" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
ramchiluveru/MarketingCampaign
|
ramchiluveru
| 2023-09-20T15:14:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T15:14:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
badassbandit/taxi
|
badassbandit
| 2023-09-20T15:10:46Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T15:10:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="badassbandit/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CyberHarem/matsuo_chizuru_idolmastercinderellagirls
|
CyberHarem
| 2023-09-20T15:09:30Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/matsuo_chizuru_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-20T14:52:03Z |
---
license: mit
datasets:
- CyberHarem/matsuo_chizuru_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of matsuo_chizuru_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/matsuo_chizuru_idolmastercinderellagirls.pt` as the embedding and `4760/matsuo_chizuru_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
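Outside of a WebUI workflow, one possible way to combine the two files with diffusers is sketched below; this assumes the safetensors LoRA and the pt embedding are compatible with the standard diffusers loaders (which may not hold for HCP-Diffusion checkpoints) and that the preview base model is available in diffusers format:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")

# Step-4760 files downloaded from this repository (local paths are assumptions).
pipe.load_textual_inversion("4760/matsuo_chizuru_idolmastercinderellagirls.pt",
                            token="matsuo_chizuru_idolmastercinderellagirls")
pipe.load_lora_weights("4760", weight_name="matsuo_chizuru_idolmastercinderellagirls.safetensors")

image = pipe("matsuo_chizuru_idolmastercinderellagirls, short_hair, black_hair").images[0]
image.save("preview.png")
```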
**The best step we recommend is 4760**, with a score of 0.996. The trigger words are:
1. `matsuo_chizuru_idolmastercinderellagirls`
2. `short_hair, black_hair, blush, hair_ornament, hairclip, black_eyes, thick_eyebrows, purple_eyes`
For the following groups, this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.983 | [Download](5100/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.996** | [**Download**](4760/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.968 | [Download](4420/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.994 | [Download](4080/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.960 | [Download](3740/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.994 | [Download](3400/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.992 | [Download](3060/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.982 | [Download](2720/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.995 | [Download](2380/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.959 | [Download](2040/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.961 | [Download](1700/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.924 | [Download](1360/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.872 | [Download](1020/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.880 | [Download](680/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.313 | [Download](340/matsuo_chizuru_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/pattern_5.png) |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
arabooke/modeltest
|
arabooke
| 2023-09-20T15:09:20Z | 0 | 1 |
tensorflowtts
|
[
"tensorflowtts",
"python",
"programming",
"ru",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | null | 2023-09-20T15:08:28Z |
---
datasets:
- fka/awesome-chatgpt-prompts
language:
- ru
- en
library_name: tensorflowtts
tags:
- python
- programming
---
|
hosnasn/tannaz1-reza
|
hosnasn
| 2023-09-20T14:54:02Z | 0 | 0 | null |
[
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-20T14:47:47Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### tannaz1_reza Dreambooth model trained by hosnasn with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4
|
csdc-atl
| 2023-09-20T14:50:13Z | 170 | 2 |
transformers
|
[
"transformers",
"safetensors",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-15T18:06:52Z |
---
language:
- en
- zh
license: other
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
# Baichuan 2 13B Chat - Int4
<!-- description start -->
## Description
This repository contains the Int4 GPTQ model files for [Baichuan 2 13B Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat).
<!-- description end -->
<!-- README_GPTQ.md-provided-files start -->
## GPTQ parameters
These GPTQ files were generated with AutoGPTQ.
- Bits: 4/8
- GS: 32/128
- Act Order: True
- Damp %: 0.1
- GPTQ dataset: a mixed Chinese/English dataset
- Sequence Length: 4096
| Model version | agieval | ceval | cmmlu | size | Inference speed (A100-40G) |
|---|---|---|---|---|---|
| [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | 40.25 | 56.33 | 58.44 | 27.79g | 31.55 tokens/s |
| [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) | 39.01 | 56.63 | 57.81 | 9.08g | 18.45 tokens/s |
| [GPTQ-4bit-32g](https://huggingface.co/csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4/tree/4bit-32g) | 38.64 | 57.18 | 57.47 | 9.87g | 27.35(hf) \ 38.28(autogptq) tokens/s |
| [GPTQ-4bit-128g](https://huggingface.co/csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4/tree/main) | 38.78 | 56.42 | 57.78 | 9.14g | 28.74(hf) \ 39.24(autogptq) tokens/s |
<!-- README_GPTQ.md-provided-files end -->
## How to use this GPTQ model in Python code
### Install the required dependencies
Required: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ via the pre-built pip packages, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### Then you can use the following code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

model_name_or_path = "csdc-atl/Baichuan2-13B-Chat-GPTQ-Int4"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-13B-Chat")
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
"温故而知新"是一句中国古代的成语,出自《论语·为政》篇。这句话的意思是:通过回顾过去,我们可以发现新的知识和理解。换句话说,学习历史和经验可以让我们更好地理解现在和未来。
这句话鼓励我们在学习和生活中不断地回顾和反思过去的经验,从而获得新的启示和成长。通过重温旧的知识和经历,我们可以发现新的观点和理解,从而更好地应对不断变化的世界和挑战。
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
---
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:opensource@baichuan-inc.com).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-13B-Chat", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-13B-Chat")
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
"温故而知新"是一句中国古代的成语,出自《论语·为政》篇。这句话的意思是:通过回顾过去,我们可以发现新的知识和理解。换句话说,学习历史和经验可以让我们更好地理解现在和未来。
这句话鼓励我们在学习和生活中不断地回顾和反思过去的经验,从而获得新的启示和成长。通过重温旧的知识和经历,我们可以发现新的观点和理解,从而更好地应对不断变化的世界和挑战。
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
Baichuan 2 模型的社区使用需遵循[《Baichuan 2 模型社区许可协议》]。Baichuan 2 支持商用。如果将 Baichuan 2 模型或其衍生品用作商业用途,请您按照如下方式联系许可方,以进行登记并向许可方申请书面授权:联系邮箱 [opensource@baichuan-inc.com]。
The use of the source code in this repository follows the open-source license Apache 2.0. Community use of the Baichuan 2 model must adhere to the [Community License for Baichuan 2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). Baichuan 2 supports commercial use. If you are using the Baichuan 2 models or their derivatives for commercial purposes, please contact the licensor in the following manner for registration and to apply for written authorization: Email opensource@baichuan-inc.com.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:opensource@baichuan-inc.com
[Email]: mailto:opensource@baichuan-inc.com
[opensource@baichuan-inc.com]: mailto:opensource@baichuan-inc.com
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
OpenDILabCommunity/Lunarlander-v2-C51
|
OpenDILabCommunity
| 2023-09-20T14:49:22Z | 0 | 0 |
pytorch
|
[
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"LunarLander-v2",
"en",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2023-04-15T12:48:19Z |
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- LunarLander-v2
benchmark_name: OpenAI/Gym/Box2d
task_name: LunarLander-v2
pipeline_tag: reinforcement-learning
model-index:
- name: C51
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/Box2d-LunarLander-v2
type: OpenAI/Gym/Box2d-LunarLander-v2
metrics:
- type: mean_reward
value: 211.75 +/- 40.32
name: mean_reward
---
# Play **LunarLander-v2** with **C51** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **C51** implementation for OpenAI/Gym/Box2d **LunarLander-v2** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, built on reinforcement learning framework implementations in PyTorch or JAX. It aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, custom training pipelines and applications can be built by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import C51Agent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = C51Agent(
env_id="LunarLander-v2", exp_name="LunarLander-v2-C51", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import C51Agent
from huggingface_ding import pull_model_from_hub
# Pull model from Hugggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/LunarLander-v2-C51")
# Instantiate the agent
agent = C51Agent(
env_id="LunarLander-v2", exp_name="LunarLander-v2-C51", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import C51Agent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = C51Agent(env_id="LunarLander-v2", exp_name="LunarLander-v2-C51")
# Train the agent
return_ = agent.train(step=int(4000000), collector_env_num=8, evaluator_env_num=8, debug=False)
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Box2d",
task_name="LunarLander-v2",
algo_name="C51",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/c51.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html",
installation_guide="pip3 install DI-engine[common_env]",
usage_file_by_git_clone="./c51/lunarlander_c51_deploy.py",
usage_file_by_huggingface_ding="./c51/lunarlander_c51_download.py",
train_file="./c51/lunarlander_c51.py",
repo_id="OpenDILabCommunity/LunarLander-v2-C51",
create_repo=False
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 260,
'n_evaluator_episode': 8,
'collector_env_num': 8,
'evaluator_env_num': 8,
'env_id': 'LunarLander-v2'
},
'policy': {
'model': {
'encoder_hidden_size_list': [512, 64],
'v_min': -30,
'v_max': 30,
'n_atom': 51,
'obs_shape': 8,
'action_shape': 4
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 10,
'batch_size': 64,
'learning_rate': 0.001,
'target_update_freq': 100,
'target_theta': 0.005,
'ignore_done': False
},
'collect': {
'collector': {},
'n_sample': 64,
'unroll_len': 1
},
'eval': {
'evaluator': {
'eval_freq': 1000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'figure_path': None,
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 260,
'n_episode': 8
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 100000
},
'eps': {
'type': 'exp',
'start': 0.95,
'end': 0.1,
'decay': 50000
}
},
'on_policy': False,
'cuda': False,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'c51',
'priority': False,
'priority_IS_weight': False,
'discount_factor': 0.99,
'nstep': 3,
'cfg_type': 'C51PolicyDict'
},
'exp_name': 'LunarLander-v2-C51',
'seed': 0,
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Lunarlander-v2-C51)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/c51.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-C51/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-C51/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 214.3 KB
- **Last Update Date:** 2023-09-20
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Box2d
- **Task:** LunarLander-v2
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html)
|
jbrinkw/fp1.1
|
jbrinkw
| 2023-09-20T14:27:23Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-20T14:11:47Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fp1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fp1.1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3884
- Rouge1: 0.1551
- Rouge2: 0.0523
- Rougel: 0.1467
- Rougelsum: 0.1467
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
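As a placeholder usage sketch (the downstream task is not documented; the pipeline below simply mirrors the model type and pipeline tag):

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="jbrinkw/fp1.1")
print(generator("Example input text for the fine-tuned t5-small model.", max_new_tokens=20))
```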
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.4836 | 1.0 | 4040 | 1.3884 | 0.1551 | 0.0523 | 0.1467 | 0.1467 | 19.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
linoyts/lora-xl-linoy_face-0.0001-0.0001-1000-1-32
|
linoyts
| 2023-09-20T14:14:01Z | 4 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-20T13:53:42Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a TOK woman
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - LinoyTsaban/lora-xl-linoy_face-0.0001-0.0001-1000-1-32
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "photo of a TOK woman" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
CyberHarem/etou_misaki_idolmastercinderellagirls
|
CyberHarem
| 2023-09-20T14:13:39Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/etou_misaki_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-20T14:01:32Z |
---
license: mit
datasets:
- CyberHarem/etou_misaki_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of etou_misaki_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/etou_misaki_idolmastercinderellagirls.pt` as the embedding and `4760/etou_misaki_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4760**, with a score of 0.964. The trigger words are:
1. `etou_misaki_idolmastercinderellagirls`
2. `brown_hair, long_hair, green_eyes, jewelry, smile, earrings, breasts`
For the following groups, this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.936 | [Download](5100/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.964** | [**Download**](4760/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.917 | [Download](4420/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.889 | [Download](4080/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.937 | [Download](3740/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.919 | [Download](3400/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.909 | [Download](3060/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.865 | [Download](2720/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.879 | [Download](2380/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.848 | [Download](2040/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.873 | [Download](1700/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.847 | [Download](1360/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.817 | [Download](1020/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.815 | [Download](680/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.751 | [Download](340/etou_misaki_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
JcKosmos74/my_awesome_model_class
|
JcKosmos74
| 2023-09-20T14:11:35Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-20T12:05:01Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: JcKosmos74/my_awesome_model_class
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JcKosmos74/my_awesome_model_class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0646
- Validation Loss: 0.2478
- Train Accuracy: 0.928
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
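A minimal inference sketch, assuming the checkpoint is public on the Hub and was saved with its `id2label` mapping (the example input text is purely illustrative):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Load the fine-tuned TF checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("JcKosmos74/my_awesome_model_class")
model = TFAutoModelForSequenceClassification.from_pretrained("JcKosmos74/my_awesome_model_class")

# Classify a single example sentence
inputs = tokenizer("This movie was surprisingly good!", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_class_id])
```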
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2543 | 0.1869 | 0.9275 | 0 |
| 0.1343 | 0.2071 | 0.927 | 1 |
| 0.0646 | 0.2478 | 0.928 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Jayanth2002/vit_base_patch16_224-finetuned-SkinDisease
|
Jayanth2002
| 2023-09-20T14:06:45Z | 216 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-20T10:01:37Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: vit_base_patch16_224-finetuned-SkinDisease
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9342629482071713
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_base_patch16_224-finetuned-SkinDisease
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1992
- Accuracy: 0.9343
## Model description
More information needed
## Intended uses & limitations
More information needed
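A minimal inference sketch, assuming the checkpoint and its image processor are available on the Hub; the file name below is only a placeholder for an example image:

```python
from transformers import pipeline

# The pipeline resolves the ViT image processor and label names from the repository
classifier = pipeline("image-classification", model="Jayanth2002/vit_base_patch16_224-finetuned-SkinDisease")

# Path or URL to an example image (placeholder)
predictions = classifier("skin_lesion_example.jpg")
print(predictions)
```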
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9099 | 1.0 | 282 | 0.8248 | 0.7647 |
| 0.5848 | 2.0 | 565 | 0.4236 | 0.8748 |
| 0.3952 | 3.0 | 847 | 0.3154 | 0.9021 |
| 0.3957 | 4.0 | 1130 | 0.2695 | 0.9106 |
| 0.3146 | 5.0 | 1412 | 0.2381 | 0.9198 |
| 0.2883 | 6.0 | 1695 | 0.2407 | 0.9218 |
| 0.2264 | 7.0 | 1977 | 0.2160 | 0.9278 |
| 0.2339 | 8.0 | 2260 | 0.2121 | 0.9283 |
| 0.1966 | 9.0 | 2542 | 0.2044 | 0.9303 |
| 0.2366 | 9.98 | 2820 | 0.1992 | 0.9343 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Harshit0722/dolly-fine-tuned-on-med-data
|
Harshit0722
| 2023-09-20T14:00:38Z | 7 | 1 |
peft
|
[
"peft",
"biology",
"medical",
"code",
"question-answering",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:medmcqa",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:bsd",
"region:us"
] |
question-answering
| 2023-06-17T14:06:53Z |
---
language:
- en
license: bsd
library_name: peft
tags:
- biology
- medical
- code
datasets:
- databricks/databricks-dolly-15k
- medmcqa
metrics:
- code_eval
pipeline_tag: question-answering
base_model: databricks/dolly-v2-3b
---
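A minimal loading sketch, assuming this repository holds a PEFT/LoRA adapter for the declared base model `databricks/dolly-v2-3b`, so the adapter is applied on top of the base weights (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model declared in the metadata, then attach the adapter from this repo
base_model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b")
model = PeftModel.from_pretrained(base_model, "Harshit0722/dolly-fine-tuned-on-med-data")

prompt = "What are common symptoms of iron deficiency?"  # illustrative medical question
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```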
|
AllenTAN/bloom-7b1-lora-tagger
|
AllenTAN
| 2023-09-20T14:00:33Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T13:50:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
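Expressed as code, the quantization settings above correspond roughly to the following `BitsAndBytesConfig` (a sketch; the actual training script is not published here):

```python
from transformers import BitsAndBytesConfig

# 8-bit loading configuration matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```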
### Framework versions
- PEFT 0.5.0
|
nthngdy/headless-pythia-owt2-70m-raw
|
nthngdy
| 2023-09-20T13:54:34Z | 141 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:the_pile_openwebtext2",
"arxiv:2309.08351",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T15:19:36Z |
---
license: mit
datasets:
- the_pile_openwebtext2
language:
- en
pipeline_tag: text-generation
---
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** TBD
- **Paper:** https://arxiv.org/abs/2309.08351
### Model Architecture and Objective
This model is a Pythia-70m architecture trained on OpenWebText-2 using the Contrastive Weight Tying objective.
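A hedged generation sketch, assuming the released checkpoint loads with the standard GPT-NeoX causal-LM classes and exposes a usable language-modelling head (which may not hold for every headless variant):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nthngdy/headless-pythia-owt2-70m-raw")
model = AutoModelForCausalLM.from_pretrained("nthngdy/headless-pythia-owt2-70m-raw")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```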
#### Software
[More Information Needed]
## Citation
**BibTeX:**
```bibtex
@misc{godey2023headless,
title={Headless Language Models: Learning without Predicting with Contrastive Weight Tying},
author={Nathan Godey and Éric de la Clergerie and Benoît Sagot},
year={2023},
eprint={2309.08351},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Model Card Authors
Nathan Godey
Eric de la Clergerie
Benoît Sagot
## Model Card Contact
nathan.godey@inria.fr
|
mayorov-s/Taxi-v3-QTable
|
mayorov-s
| 2023-09-20T13:50:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T13:50:17Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-QTable
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.81
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mayorov-s/Taxi-v3-QTable", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mayorov-s/q-FrozenLake-v1-4x4-noSlippery
|
mayorov-s
| 2023-09-20T13:45:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-21T11:34:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="mayorov-s/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
larabe/combo
|
larabe
| 2023-09-20T13:17:01Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-09-20T11:23:32Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: combo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# combo
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
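A hedged inference sketch, assuming the repository bundles the Donut processor from the base checkpoint; the task prompt token `<s_combo>` below is a placeholder for whatever prompt token was used during fine-tuning:

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("larabe/combo")
model = VisionEncoderDecoderModel.from_pretrained("larabe/combo")

image = Image.open("example_document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Placeholder task prompt; replace with the token used during fine-tuning
decoder_input_ids = processor.tokenizer("<s_combo>", add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```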
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AbdelKarim95/ppo-PyramidsRND
|
AbdelKarim95
| 2023-09-20T13:04:33Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-09-20T13:04:29Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AbdelKarim95/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
r0ll/Scammers
|
r0ll
| 2023-09-20T13:00:42Z | 0 | 0 | null |
[
"ru",
"license:openrail",
"region:us"
] | null | 2023-09-20T12:59:37Z |
---
license: openrail
language:
- ru
---
|
aipicasso/manga-diffusion-poc
|
aipicasso
| 2023-09-20T12:53:56Z | 78 | 6 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"stable-diffusion-diffusers",
"arxiv:2112.10752",
"arxiv:2212.03860",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-19T10:51:54Z |
---
license: other
tags:
- stable-diffusion
- text-to-image
- stable-diffusion-diffusers
- diffusers
inference: true
---
# Manga Diffusion PoC Model Card

English: [Click Here](README_en.md)
# Introduction
Manga Diffusion PoC (Proof-of-Concept) is an image generation AI specialized for manga, created by AI Picasso Inc.
Manga Diffusion PoC is trained only on images permitted by their copyright holders, public domain images, and CC-0 images.
# License
The license of this model is the [Mitsua Open RAIL-M License (More restrictive variant of CreativeML Open RAIL-M)](LICENSE).
This model **can be used commercially**, but "you cannot pass off generated images as not having been generated by AI".
# Usage
You can download the model from [here](poc.safetensors).
You can also download the model using Diffusers.
What follows is the standard model card content.
## Model Details
- **Model type:** Diffusion-based text-to-image generation model
- **Language:** Japanese
- **License:** Mitsua Open RAIL-M License
- **Model description:** This model can generate appropriate images in response to prompts. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip).
- **Notes:**
- **References:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Example Usage
Usage is the same as for Stable Diffusion v2.
There are many ways to use the model, but here are two patterns:
- Web UI
- Diffusers
### Web UI
As with Stable Diffusion v2, place the safetensors-format model file into your model folder.
For detailed installation instructions, see [this article](https://note.com/it_navi/n/n6ffb66513769).
### Diffusers
Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers).
First, run the following script to install the libraries.
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
Then run the following script to generate an image.
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch
model_id = "aipicasso/manga-diffusion-poc"
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "monochrome, grayscale, tower"
images = pipe(prompt, num_inference_steps=30, height=512, width=768).images
images[0].save("tower.png")
```
**Notes**:
- Using [xformers](https://github.com/facebookresearch/xformers) speeds up generation.
- If your GPU has limited memory, use `pipe.enable_attention_slicing()`.
#### Intended Uses
- Assisting the creation of illustrations, manga, and anime
- Both commercial and non-commercial use are allowed
- Communication with creators when commissioning work
- Commercial provision of image generation services
- Please handle the generated outputs with care
- Self-expression
- Using this AI to express what makes "you" yourself
- News reporting on image generation AI
- Allowed not only for public broadcasters but also for commercial companies
- We judged that the "right to know" about image synthesis AI does not harm the creative industry; we also respect freedom of the press.
- Research and development
- Using the model on Discord
- Prompt engineering
- Fine-tuning (also called additional training)
- DreamBooth, etc.
- Merging with other models
- Evaluating the performance of this model with metrics such as FID
- Verifying, for example with checksums or hash functions, that this model is independent of models other than Stable Diffusion
- Education
- Graduation projects by art college and vocational school students
- University students' graduation theses and coursework
- Teachers explaining the current state of image generation AI
- Uses described in the Hugging Face Community tab
- Please ask questions in Japanese or English
#### Out-of-Scope Uses
- Presenting things as if they were fact
- Use in monetized content such as YouTube videos
- Offering it directly as a commercial service
- Causing trouble for teachers
- Anything else that harms the creative industry
# Prohibited and Malicious Uses
- Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)); this may violate copyright law
- Do not run image-to-image on other people's works without permission; this may violate copyright law
- Do not distribute obscene material; this may violate Article 175 of the Japanese Penal Code
- Do not violate the generally accepted etiquette of the industry
- Do not present claims that are not based on fact as if they were fact; this may constitute obstruction of business
- Fake news
## Model Limitations and Bias
### Model Limitations
- Much about diffusion models and large language models is still unknown, and their limitations have not yet been determined.
### Bias
- Much about diffusion models and large language models is still unknown, and their biases have not yet been determined.
## Training
**Training data**
- [Mitsua Diffusion One](https://huggingface.co/Mitsua/mitsua-diffusion-one)
- [Manga 109-s](http://www.manga109.org/)
**Training process**
- **Hardware:** A6000x2
## Evaluation Results
We are seeking third-party evaluations.
## Environmental Impact
- **Hardware type:** A6000x2
- **Hours used:** 100
- **Training location:** Japan
## References
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
*This model card was written based on [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).
|
csukuangfj/sherpa-onnx-streaming-zipformer-en-2023-06-21
|
csukuangfj
| 2023-09-20T12:53:44Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-06-21T07:23:26Z |
---
license: apache-2.0
---
The torchscript model is from
https://huggingface.co/marcoyang/icefall-libri-giga-pruned-transducer-stateless7-streaming-2023-04-04
The training code is from
https://github.com/k2-fsa/icefall/pull/984
|
ckmfong/q-FrozenLake-v1-4x4-noSlippery
|
ckmfong
| 2023-09-20T12:34:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T12:34:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ckmfong/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CyberHarem/yokoyama_chika_idolmastercinderellagirls
|
CyberHarem
| 2023-09-20T12:26:30Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yokoyama_chika_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-20T12:09:49Z |
---
license: mit
datasets:
- CyberHarem/yokoyama_chika_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yokoyama_chika_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4760, you need to download `4760/yokoyama_chika_idolmastercinderellagirls.pt` as the embedding and `4760/yokoyama_chika_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4760**, with the score of 0.980. The trigger words are:
1. `yokoyama_chika_idolmastercinderellagirls`
2. `twintails, long_hair, smile, open_mouth, blush, hair_ornament, brown_hair, bangs, brown_eyes, green_eyes`
For the following groups, use of this model is not recommended and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.971 | [Download](5100/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.980** | [**Download**](4760/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.969 | [Download](4420/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.971 | [Download](4080/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.966 | [Download](3740/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.964 | [Download](3400/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.947 | [Download](3060/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.958 | [Download](2720/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.959 | [Download](2380/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.942 | [Download](2040/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.967 | [Download](1700/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.944 | [Download](1360/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.920 | [Download](1020/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.912 | [Download](680/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.692 | [Download](340/yokoyama_chika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ldos/text_shortening_model_v44
|
ldos
| 2023-09-20T12:17:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-xsum",
"base_model:finetune:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-20T07:29:11Z |
---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v44
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8836
- Rouge1: 0.4921
- Rouge2: 0.2719
- Rougel: 0.4429
- Rougelsum: 0.4423
- Bert precision: 0.8746
- Bert recall: 0.8761
- Average word count: 8.7063
- Max word count: 17
- Min word count: 5
- Average token count: 16.2989
- % shortened texts with length > 12: 8.7302
## Model description
More information needed
## Intended uses & limitations
More information needed
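A minimal inference sketch, assuming the checkpoint is public and the task is shortening a single English sentence (the input text and generation settings are illustrative):

```python
from transformers import pipeline

shortener = pipeline("text2text-generation", model="ldos/text_shortening_model_v44")

text = "Please remember to bring your own reusable water bottle to the event tomorrow."
print(shortener(text, max_length=20, num_beams=4)[0]["generated_text"])
```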
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 1.0083 | 1.0 | 83 | 1.4717 | 0.4904 | 0.2378 | 0.426 | 0.4266 | 0.8725 | 0.8732 | 8.5794 | 18 | 4 | 15.6164 | 6.3492 |
| 0.5702 | 2.0 | 166 | 1.4852 | 0.4722 | 0.2421 | 0.414 | 0.4143 | 0.869 | 0.8653 | 7.9101 | 14 | 4 | 13.6455 | 1.5873 |
| 0.4588 | 3.0 | 249 | 1.6283 | 0.5038 | 0.2733 | 0.4424 | 0.4422 | 0.8732 | 0.8794 | 9.0053 | 16 | 4 | 16.8386 | 8.9947 |
| 0.3586 | 4.0 | 332 | 1.6017 | 0.4965 | 0.2762 | 0.4381 | 0.4383 | 0.8709 | 0.8787 | 9.2381 | 18 | 4 | 16.3042 | 12.1693 |
| 0.2479 | 5.0 | 415 | 1.7497 | 0.4794 | 0.2613 | 0.4295 | 0.43 | 0.872 | 0.8702 | 8.3228 | 15 | 4 | 15.209 | 3.1746 |
| 0.2296 | 6.0 | 498 | 1.8482 | 0.4935 | 0.2739 | 0.4442 | 0.4443 | 0.8737 | 0.8755 | 8.7963 | 17 | 5 | 16.2989 | 7.1429 |
| 0.3065 | 7.0 | 581 | 1.9485 | 0.4765 | 0.2552 | 0.4213 | 0.4212 | 0.8698 | 0.8693 | 8.4683 | 17 | 5 | 15.6005 | 7.9365 |
| 0.2598 | 8.0 | 664 | 2.1608 | 0.4871 | 0.2585 | 0.4316 | 0.4319 | 0.8707 | 0.8736 | 8.963 | 16 | 5 | 16.6481 | 9.5238 |
| 0.2707 | 9.0 | 747 | 2.0966 | 0.4758 | 0.2603 | 0.4231 | 0.4246 | 0.8709 | 0.8717 | 8.4841 | 16 | 4 | 15.9312 | 7.1429 |
| 0.2099 | 10.0 | 830 | 2.2721 | 0.4777 | 0.2604 | 0.4246 | 0.4246 | 0.8735 | 0.8724 | 8.4312 | 15 | 4 | 15.9471 | 5.5556 |
| 0.1668 | 11.0 | 913 | 2.3536 | 0.4758 | 0.2541 | 0.4331 | 0.4328 | 0.8721 | 0.87 | 8.2857 | 14 | 4 | 15.7725 | 3.1746 |
| 0.1552 | 12.0 | 996 | 2.4572 | 0.484 | 0.2562 | 0.4313 | 0.4304 | 0.8726 | 0.875 | 8.828 | 17 | 4 | 16.246 | 7.9365 |
| 0.2141 | 13.0 | 1079 | 2.4485 | 0.4785 | 0.2631 | 0.4257 | 0.4252 | 0.8678 | 0.8736 | 9.1402 | 19 | 4 | 16.6561 | 11.3757 |
| 0.1348 | 14.0 | 1162 | 2.5012 | 0.4821 | 0.2613 | 0.4292 | 0.4296 | 0.8706 | 0.8738 | 8.8783 | 17 | 4 | 16.5185 | 10.0529 |
| 0.074 | 15.0 | 1245 | 2.5309 | 0.4915 | 0.2745 | 0.445 | 0.444 | 0.8764 | 0.8768 | 8.6667 | 16 | 4 | 16.2513 | 9.2593 |
| 0.1822 | 16.0 | 1328 | 2.5735 | 0.4709 | 0.2566 | 0.4239 | 0.4232 | 0.872 | 0.8692 | 8.2063 | 15 | 3 | 15.7249 | 4.2328 |
| 0.086 | 17.0 | 1411 | 2.8597 | 0.4831 | 0.2675 | 0.4373 | 0.4372 | 0.8722 | 0.8743 | 8.754 | 16 | 5 | 16.5476 | 8.7302 |
| 0.0872 | 18.0 | 1494 | 2.7420 | 0.4831 | 0.2677 | 0.4367 | 0.4353 | 0.8724 | 0.873 | 8.664 | 17 | 5 | 16.3016 | 7.672 |
| 0.1164 | 19.0 | 1577 | 2.8790 | 0.4867 | 0.269 | 0.4388 | 0.4381 | 0.8737 | 0.8755 | 8.7725 | 17 | 5 | 16.4418 | 8.9947 |
| 0.1101 | 20.0 | 1660 | 2.8836 | 0.4921 | 0.2719 | 0.4429 | 0.4423 | 0.8746 | 0.8761 | 8.7063 | 17 | 5 | 16.2989 | 8.7302 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AbdelKarim95/ppo-SnowballTarget
|
AbdelKarim95
| 2023-09-20T12:12:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-20T12:12:19Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AbdelKarim95/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
alexandrainst/da-ner-base
|
alexandrainst
| 2023-09-20T11:56:44Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"da",
"dataset:dane",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- da
license: apache-2.0
datasets:
- dane
widget:
- text: Jens Peter Hansen kommer fra Danmark
---
# BERT fine-tuned for Named Entity Recognition in Danish
The model tags tokens (in Danish sentences) with named entity tags (BIO format) [PER, ORG, LOC, MISC].
The pretrained language model used for fine-tuning is the [Danish BERT](https://github.com/certainlyio/nordic_bert) by BotXO.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ner.html#bert) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForTokenClassification
model = BertForTokenClassification.from_pretrained("alexandrainst/da-ner-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-ner-base")
```
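For end-to-end tagging, the standard token-classification pipeline should also work with this checkpoint; a sketch using the widget sentence above:

```python
from transformers import pipeline

# Aggregation merges word pieces into whole entity spans
ner = pipeline("token-classification", model="alexandrainst/da-ner-base", aggregation_strategy="simple")
print(ner("Jens Peter Hansen kommer fra Danmark"))
```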
## Training Data
The model has been trained on the [DaNE](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane).
|
alexandrainst/da-binary-emotion-classification-base
|
alexandrainst
| 2023-09-20T11:56:16Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- da
license: apache-2.0
widget:
- text: Der er et træ i haven.
---
# Danish BERT for emotion detection
The BERT Emotion model detects whether a Danish text is emotional or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
alexandrainst/da-hatespeech-detection-base
|
alexandrainst
| 2023-09-20T11:56:09Z | 122 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- da
license: apache-2.0
widget:
- text: Senile gamle idiot
---
# Danish BERT for hate speech (offensive language) detection
The BERT HateSpeech model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
alexandrainst/da-ned-base
|
alexandrainst
| 2023-09-20T11:55:31Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- da
license: apache-2.0
---
# XLM-Roberta fine-tuned for Named Entity Disambiguation
Given a sentence and a knowledge graph context, the model detects whether a specific entity (represented by the knowledge graph context) is mentioned in the sentence (binary classification).
The base language model used is the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base).
Here is how to use the model:
```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
model = XLMRobertaForSequenceClassification.from_pretrained("alexandrainst/da-ned-base")
tokenizer = XLMRobertaTokenizer.from_pretrained("alexandrainst/da-ned-base")
```
The tokenizer takes two strings as input: the sentence and the knowledge graph (KG) context.
Here is an example:
```python
sentence = "Karen Blixen vendte tilbage til Danmark, hvor hun boede resten af sit liv på Rungstedlund, som hun arvede efter sin mor i 1939"
kg_context = "udmærkelser modtaget Kritikerprisen udmærkelser modtaget Tagea Brandts Rejselegat udmærkelser modtaget Ingenio et arti udmærkelser modtaget Holbergmedaljen udmærkelser modtaget De Gyldne Laurbær mor Ingeborg Dinesen ægtefælle Bror von Blixen-Finecke køn kvinde Commons-kategori Karen Blixen LCAuth no95003722 VIAF 90663542 VIAF 121643918 GND-identifikator 118637878 ISNI 0000 0001 2096 6265 ISNI 0000 0003 6863 4408 ISNI 0000 0001 1891 0457 fødested Rungstedlund fødested Rungsted dødssted Rungstedlund dødssted København statsborgerskab Danmark NDL-nummer 00433530 dødsdato +1962-09-07T00:00:00Z dødsdato +1962-01-01T00:00:00Z fødselsdato +1885-04-17T00:00:00Z fødselsdato +1885-01-01T00:00:00Z AUT NKC jn20000600905 AUT NKC jo2015880827 AUT NKC xx0196181 emnets hovedkategori Kategori:Karen Blixen tilfælde af menneske billede Karen Blixen cropped from larger original.jpg IMDb-identifikationsnummer nm0227598 Freebase-ID /m/04ymd8w BNF 118857710 beskæftigelse skribent beskæftigelse selvbiograf beskæftigelse novelleforfatter ..."
```
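Continuing the two snippets above, the pair can then be scored as an ordinary sequence-pair classification; a sketch, where the positive class is assumed to indicate that the entity is mentioned:

```python
import torch

# Encode the sentence together with the KG context and classify the pair
inputs = tokenizer(sentence, kg_context, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```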
A KG context, for a specific entity, can be generated from its Wikidata page.
In the previous example, the KG context is a string representation of the Wikidata page of [Karen Blixen (QID=Q182804)](https://www.wikidata.org/wiki/Q182804).
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ned.html#xlmr) for more details about how to generate a KG context.
## Training Data
The model has been trained on the [DaNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#daned) and [DaWikiNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dawikined) datasets.
|
alexandrainst/da-discourse-coherence-base
|
alexandrainst
| 2023-09-20T11:50:37Z | 114 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"da",
"dataset:ajders/ddisco",
"base_model:NbAiLab/nb-bert-base",
"base_model:finetune:NbAiLab/nb-bert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-08T18:42:51Z |
---
language:
- da
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ajders/ddisco
metrics:
- accuracy
base_model: NbAiLab/nb-bert-base
model-index:
- name: ddisco_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# da-discourse-coherence-base
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the [DDisco](https://huggingface.co/datasets/ajders/ddisco) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7487
- Accuracy: 0.6915
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 703
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3422 | 0.4 | 5 | 1.0166 | 0.5721 |
| 0.9645 | 0.8 | 10 | 0.8966 | 0.5721 |
| 0.9854 | 1.24 | 15 | 0.8499 | 0.5721 |
| 0.8628 | 1.64 | 20 | 0.8379 | 0.6517 |
| 0.9046 | 2.08 | 25 | 0.8228 | 0.5721 |
| 0.8361 | 2.48 | 30 | 0.7980 | 0.5821 |
| 0.8158 | 2.88 | 35 | 0.8095 | 0.5821 |
| 0.8689 | 3.32 | 40 | 0.7989 | 0.6169 |
| 0.8125 | 3.72 | 45 | 0.7730 | 0.6965 |
| 0.843 | 4.16 | 50 | 0.7566 | 0.6418 |
| 0.7421 | 4.56 | 55 | 0.7840 | 0.6517 |
| 0.7949 | 4.96 | 60 | 0.7531 | 0.6915 |
| 0.828 | 5.4 | 65 | 0.7464 | 0.6816 |
| 0.7438 | 5.8 | 70 | 0.7487 | 0.6915 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.0a0+d0d6b1f
- Datasets 2.9.0
- Tokenizers 0.13.2
### Contributor
[ajders](https://github.com/AJDERS)
|
alexandrainst/da-offensive-detection-base
|
alexandrainst
| 2023-09-20T11:50:26Z | 105 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-12T10:04:35Z |
---
license: apache-2.0
widget:
- text: Din store idiot
base_model: xlm-roberta-base
---
# Danish Offensive Text Detection based on XLM-Roberta-Base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a dataset consisting of approximately 5 million Facebook comments on [DR](https://dr.dk/)'s public Facebook pages. The labels have been automatically generated using weak supervision, based on the [Snorkel](https://www.snorkel.org/) framework.
The model achieves SOTA on a test set consisting of 600 Facebook comments annotated using majority vote by three annotators, of which 35.8% were labelled as offensive:
| **Model** | **Precision** | **Recall** | **F1-score** | **F2-score** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| `alexandrainst/da-offensive-detection-base` (this) | 74.81% | **89.77%** | **81.61%** | **86.32%** |
| [`alexandrainst/da-offensive-detection-small`](https://huggingface.co/alexandrainst/da-offensive-detection-small) | 74.13% | 89.30% | 81.01% | 85.79% |
| [`A&ttack`](https://github.com/ogtal/A-ttack) | **97.32%** | 50.70% | 66.67% | 56.07% |
| [`alexandrainst/da-hatespeech-detection-small`](https://huggingface.co/alexandrainst/da-hatespeech-detection-small) | 86.43% | 56.28% | 68.17% | 60.50% |
| [`Guscode/DKbert-hatespeech-detection`](https://huggingface.co/Guscode/DKbert-hatespeech-detection) | 75.41% | 42.79% | 54.60% | 46.84% |
## Using the model
You can use the model simply by running the following:
```python
>>> from transformers import pipeline
>>> offensive_text_pipeline = pipeline(model="alexandrainst/da-offensive-detection-base")
>>> offensive_text_pipeline("Din store idiot")
[{'label': 'Offensive', 'score': 0.9997463822364807}]
```
Processing multiple documents at the same time can be done as follows:
```python
>>> offensive_text_pipeline(["Din store idiot", "ej hvor godt :)"])
[{'label': 'Offensive', 'score': 0.9997463822364807}, {'label': 'Not offensive', 'score': 0.9996451139450073}]
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- gradient_accumulation_steps: 1
- total_train_batch_size: 32
- seed: 4242
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- max_steps: 500000
- fp16: True
- eval_steps: 1000
- early_stopping_patience: 100
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
honzatoegel/Llama-2-70b-chat-hf-gramma-corrections-de-en-2-overfitt
|
honzatoegel
| 2023-09-20T11:42:12Z | 2 | 0 |
peft
|
[
"peft",
"de",
"en",
"dataset:honzatoegel/lola-gramma-de-en",
"base_model:meta-llama/Llama-2-70b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-70b-chat-hf",
"region:us"
] | null | 2023-08-30T13:40:43Z |
---
language:
- de
- en
library_name: peft
datasets:
- honzatoegel/lola-gramma-de-en
base_model: meta-llama/Llama-2-70b-chat-hf
---
## Examples
As you can see from the examples below, the output is far from ideal, and also far from what a plain GPT/Llama 2 prompt without finetuning produces.
Quality issues:
- the corrected sentences still contain some minor errors
- the explanations of the corrections are far from optimal; they often refer to additional words that are not necessary
- the output repeats itself
- the output list is sometimes broken into two lists
The low quality is probably caused by the very small amount of training data (100 rows), which offers too few combinations to generalize from.
### Example 1
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:
#### Output
Er geht dort, aber ich muss anders gemacht.
Repairs:
- Instead of "gehen" it should be "goes" - the verb "gehen" is in the third person singular.
- Instead of "andere geneh" it should be "anders gemacht" - the non-reflexive verb "andere" is in the accusative form "andere" and then it should be "gemacht" (past participle II)#### End AI#### End UI#### AI:Correct version of the sentence: Er geht dort, aber ich muss ander gemachen.
Repairs:
- Instead of "gehen" it should be "goes" - the
### Example 2
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Mir es geht gut, danke#### AI:Correct version of the sentence:
#### Output
"Mir geht es gut, danke."
Repairs:
- "Mir es geht" the verb should be placed in the second position "Mir geht es" - because of the correct word order.
- The sentence lacks a period at the end.#### End AI:》
Mir es geht gut, danke
Correct version of the sentence: "Mir geht es gut, danke."
Repairs:
- "Mir es geht" the verb should be placed in the second position "Mir geht es" - because of the correct word order.
- The sentence lacks a period at the end.#### End
Mir es geht gut, danke
Correct version of the sentence
### Example 3
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Habe ich das wirklich gemachen?#### AI:Correct version of the sentence:
#### Output
Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" - the verb "gemachen" must be conjugated in the past tense, third person singular.#### End AI#### End User:Correct version of the sentence: Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" - the verb "gemachen" must be conjugated in the past tense, third person singular.#### End AI####End User:Correct version of the sentence: Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" -
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
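Expressed as code, the 4-bit settings above correspond roughly to the following `BitsAndBytesConfig` (a sketch; the full training script is not included here):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 loading configuration matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```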
### Framework versions
- PEFT 0.4.0
|
honzatoegel/Llama-2-7b-chat-hf-gramma-corrections-de-en-overfitt
|
honzatoegel
| 2023-09-20T11:42:00Z | 1 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-08-30T08:39:05Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
honzatoegel/Llama-2-70b-chat-hf-gramma-corrections-de-en
|
honzatoegel
| 2023-09-20T11:41:45Z | 1 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-70b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-70b-chat-hf",
"region:us"
] | null | 2023-08-27T13:40:56Z |
---
library_name: peft
base_model: meta-llama/Llama-2-70b-chat-hf
---
## Examples
As you can see from the examples below, the output is far from ideal, and also far from what a plain GPT/Llama 2 prompt without finetuning produces.
Quality issues:
- the corrected sentences still contain some minor errors
- the explanations of the corrections are far from optimal; they often refer to additional words that are not necessary
- the output repeats itself
- the output list is sometimes broken into two lists
The low quality is probably caused by the very small amount of training data (100 rows), which offers too few combinations to generalize from.
### Example 1
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Ich mussen sehr rückwärts fahren.#### AI:Correct version of the sentence:
#### Output
Ich muss sehr zurückfahren.
Repairs:
- Instead of "rückwärts" it should be "zurück" - this is the correct form of the verb "to drive back".#### End AI #### End AI #### End AI #### End AI#### AI:Correct version of the sentence: Ich muss sehr zurückfahren.
Repairs:
- Instead of "mussen" it should be "muss" - this is the correct form of the verb "to must" in the third person singular.#### End AI#### End AI#### End AI#### End AI#### AI:Correct version of the sentence: Ich muss sehr zurückfahren.
Repairs:
### Example 2
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:
#### Output
Er geht dort, aber ich muss anders gehen.
Repairs:
- Instead of "gehen" it should be "geht" - it is the third person singular of the verb "gehen".
- Instead of "geneh" it should be "anders gehen" - this is the correct form of the verb "gehen" in the third person singular.#### End AI %}
Repairs:
- Instead of "gehen" it should be "geht" - it is the third person singular of the verb "gehen".
- Instead of "geneh" it should be "anders gehen" - this is the correct form of the verb "gehen" in
### Example 3
#### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Habe ich das wirklich gemachen?#### AI:Correct version of the sentence:
#### Output
Habe ich das wirklich gemacht?
Repairs:
- Instead of "gemachen" it should be "gemacht" - this is the third person singular of the verb "machen" in the past tense.#### End AI %}
Repairs:
- Instead of "Habe ich das wirklich gemachen?" it should be "Habe ich das wirklich gemacht?" - This is the correct sentence with all the required information.#### End AI %}
Repairs:
- The verb "gemachen" is in the wrong form, it should be "gemacht" - this is the third person singular of the verb "machen" in the past tense.#### End AI %}
Repairs:
## Training procedure
Trained on: 1x RTX A6000
Used: 30 GB RAM, 47 GB GPU memory, 130 GB disk
8 epochs, approx. 25 minutes, final loss: 0.36
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
JcKosmos74/my_awesome_wnut_model
|
JcKosmos74
| 2023-09-20T11:41:09Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-20T11:38:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: JcKosmos74/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JcKosmos74/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1232
- Validation Loss: 0.2581
- Train Precision: 0.5543
- Train Recall: 0.4091
- Train F1: 0.4708
- Train Accuracy: 0.9444
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
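The optimizer above can be reconstructed with the `create_optimizer` helper from `transformers`; a sketch, assuming 636 total training steps and no warmup, as in the schedule config:

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (polynomial power=1.0) decay from 2e-5 to 0 over 636 steps
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=636,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```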
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3220 | 0.3166 | 0.3671 | 0.0909 | 0.1457 | 0.9273 | 0 |
| 0.1617 | 0.2656 | 0.5199 | 0.3600 | 0.4254 | 0.9409 | 1 |
| 0.1232 | 0.2581 | 0.5543 | 0.4091 | 0.4708 | 0.9444 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hosnasn/Reza_DB
|
hosnasn
| 2023-09-20T11:33:50Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-20T11:33:46Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks Reza
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
CyberHarem/yuuki_haru_idolmastercinderellagirls
|
CyberHarem
| 2023-09-20T11:33:12Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yuuki_haru_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-20T11:12:50Z |
---
license: mit
datasets:
- CyberHarem/yuuki_haru_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yuuki_haru_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 5040, download `5040/yuuki_haru_idolmastercinderellagirls.pt` as the embedding and `5040/yuuki_haru_idolmastercinderellagirls.safetensors` as the LoRA. Using both files together, you can generate images of the desired character; a rough loading sketch is given below the trigger words.
**The best step we recommend is 5040**, with a score of 0.807. The trigger words are:
1. `yuuki_haru_idolmastercinderellagirls`
2. `orange_hair, blush, long_hair, bangs, purple_eyes, brown_eyes`
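As a rough illustration only, a diffusers-based loading sketch is given below; whether diffusers can parse the HCP-Diffusion embedding and LoRA files directly, and whether the preview base model is available in diffusers format, are assumptions; many users load these files through a WebUI instead.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes Meina/MeinaMix_V11 ships diffusers-format weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")
# Assumes the step-5040 files were downloaded into a local "5040" directory
# and that their format is readable by diffusers.
pipe.load_textual_inversion(
    "5040/yuuki_haru_idolmastercinderellagirls.pt",
    token="yuuki_haru_idolmastercinderellagirls",
)
pipe.load_lora_weights(
    "5040", weight_name="yuuki_haru_idolmastercinderellagirls.safetensors"
)
image = pipe(
    "yuuki_haru_idolmastercinderellagirls, orange_hair, long_hair, purple_eyes"
).images[0]
image.save("yuuki_haru.png")
```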
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8400 | 0.793 | [Download](8400/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](8400/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](8400/previews/pattern_9.png) |  |  | [<NSFW, click to see>](8400/previews/pattern_12.png) |  |  | [<NSFW, click to see>](8400/previews/pattern_15.png) |  | [<NSFW, click to see>](8400/previews/bondage.png) | [<NSFW, click to see>](8400/previews/free.png) |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| 7840 | 0.753 | [Download](7840/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](7840/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](7840/previews/pattern_9.png) |  |  | [<NSFW, click to see>](7840/previews/pattern_12.png) |  |  | [<NSFW, click to see>](7840/previews/pattern_15.png) |  | [<NSFW, click to see>](7840/previews/bondage.png) | [<NSFW, click to see>](7840/previews/free.png) |  |  | [<NSFW, click to see>](7840/previews/nude.png) | [<NSFW, click to see>](7840/previews/nude2.png) |  |  |
| 7280 | 0.781 | [Download](7280/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](7280/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_9.png) |  |  | [<NSFW, click to see>](7280/previews/pattern_12.png) |  |  | [<NSFW, click to see>](7280/previews/pattern_15.png) |  | [<NSFW, click to see>](7280/previews/bondage.png) | [<NSFW, click to see>](7280/previews/free.png) |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6720 | 0.774 | [Download](6720/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](6720/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](6720/previews/pattern_9.png) |  |  | [<NSFW, click to see>](6720/previews/pattern_12.png) |  |  | [<NSFW, click to see>](6720/previews/pattern_15.png) |  | [<NSFW, click to see>](6720/previews/bondage.png) | [<NSFW, click to see>](6720/previews/free.png) |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6160 | 0.767 | [Download](6160/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](6160/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](6160/previews/pattern_9.png) |  |  | [<NSFW, click to see>](6160/previews/pattern_12.png) |  |  | [<NSFW, click to see>](6160/previews/pattern_15.png) |  | [<NSFW, click to see>](6160/previews/bondage.png) | [<NSFW, click to see>](6160/previews/free.png) |  |  | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) |  |  |
| 5600 | 0.783 | [Download](5600/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5600/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](5600/previews/pattern_9.png) |  |  | [<NSFW, click to see>](5600/previews/pattern_12.png) |  |  | [<NSFW, click to see>](5600/previews/pattern_15.png) |  | [<NSFW, click to see>](5600/previews/bondage.png) | [<NSFW, click to see>](5600/previews/free.png) |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| **5040** | **0.807** | [**Download**](5040/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5040/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](5040/previews/pattern_9.png) |  |  | [<NSFW, click to see>](5040/previews/pattern_12.png) |  |  | [<NSFW, click to see>](5040/previews/pattern_15.png) |  | [<NSFW, click to see>](5040/previews/bondage.png) | [<NSFW, click to see>](5040/previews/free.png) |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4480 | 0.787 | [Download](4480/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4480/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](4480/previews/pattern_9.png) |  |  | [<NSFW, click to see>](4480/previews/pattern_12.png) |  |  | [<NSFW, click to see>](4480/previews/pattern_15.png) |  | [<NSFW, click to see>](4480/previews/bondage.png) | [<NSFW, click to see>](4480/previews/free.png) |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3920 | 0.755 | [Download](3920/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3920/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](3920/previews/pattern_9.png) |  |  | [<NSFW, click to see>](3920/previews/pattern_12.png) |  |  | [<NSFW, click to see>](3920/previews/pattern_15.png) |  | [<NSFW, click to see>](3920/previews/bondage.png) | [<NSFW, click to see>](3920/previews/free.png) |  |  | [<NSFW, click to see>](3920/previews/nude.png) | [<NSFW, click to see>](3920/previews/nude2.png) |  |  |
| 3360 | 0.802 | [Download](3360/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3360/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](3360/previews/pattern_9.png) |  |  | [<NSFW, click to see>](3360/previews/pattern_12.png) |  |  | [<NSFW, click to see>](3360/previews/pattern_15.png) |  | [<NSFW, click to see>](3360/previews/bondage.png) | [<NSFW, click to see>](3360/previews/free.png) |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2800 | 0.789 | [Download](2800/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2800/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](2800/previews/pattern_9.png) |  |  | [<NSFW, click to see>](2800/previews/pattern_12.png) |  |  | [<NSFW, click to see>](2800/previews/pattern_15.png) |  | [<NSFW, click to see>](2800/previews/bondage.png) | [<NSFW, click to see>](2800/previews/free.png) |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2240 | 0.713 | [Download](2240/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2240/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](2240/previews/pattern_9.png) |  |  | [<NSFW, click to see>](2240/previews/pattern_12.png) |  |  | [<NSFW, click to see>](2240/previews/pattern_15.png) |  | [<NSFW, click to see>](2240/previews/bondage.png) | [<NSFW, click to see>](2240/previews/free.png) |  |  | [<NSFW, click to see>](2240/previews/nude.png) | [<NSFW, click to see>](2240/previews/nude2.png) |  |  |
| 1680 | 0.736 | [Download](1680/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1680/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](1680/previews/pattern_9.png) |  |  | [<NSFW, click to see>](1680/previews/pattern_12.png) |  |  | [<NSFW, click to see>](1680/previews/pattern_15.png) |  | [<NSFW, click to see>](1680/previews/bondage.png) | [<NSFW, click to see>](1680/previews/free.png) |  |  | [<NSFW, click to see>](1680/previews/nude.png) | [<NSFW, click to see>](1680/previews/nude2.png) |  |  |
| 1120 | 0.664 | [Download](1120/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1120/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](1120/previews/pattern_9.png) |  |  | [<NSFW, click to see>](1120/previews/pattern_12.png) |  |  | [<NSFW, click to see>](1120/previews/pattern_15.png) |  | [<NSFW, click to see>](1120/previews/bondage.png) | [<NSFW, click to see>](1120/previews/free.png) |  |  | [<NSFW, click to see>](1120/previews/nude.png) | [<NSFW, click to see>](1120/previews/nude2.png) |  |  |
| 560 | 0.653 | [Download](560/yuuki_haru_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](560/previews/pattern_4.png) |  |  |  |  | [<NSFW, click to see>](560/previews/pattern_9.png) |  |  | [<NSFW, click to see>](560/previews/pattern_12.png) |  |  | [<NSFW, click to see>](560/previews/pattern_15.png) |  | [<NSFW, click to see>](560/previews/bondage.png) | [<NSFW, click to see>](560/previews/free.png) |  |  | [<NSFW, click to see>](560/previews/nude.png) | [<NSFW, click to see>](560/previews/nude2.png) |  |  |
|
AbdelKarim95/Reinforce-2bis
|
AbdelKarim95
| 2023-09-20T11:30:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T11:30:53Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2bis
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.10 +/- 33.44
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Gurusha/dreambooth_peace_sign
|
Gurusha
| 2023-09-20T11:26:46Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-20T09:58:58Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a human sks hand making peace sign, with index and middle finger pointing out and rest of the fingers folded
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Gurusha/dreambooth_peace_sign
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a human sks hand making peace sign, with index and middle finger pointing out and rest of the fingers folded" using [DreamBooth](https://dreambooth.github.io/). Example images are shown below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
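A minimal inference sketch with diffusers is shown below; it assumes the LoRA file uses the default `pytorch_lora_weights.safetensors` name produced by the DreamBooth training script.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The fp16-fix VAE mentioned above avoids numerical issues in half precision.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Gurusha/dreambooth_peace_sign")

prompt = "a human sks hand making peace sign, with index and middle finger pointing out and rest of the fingers folded"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("peace_sign.png")
```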
|
onceiapp/swin-tiny-patch4-window7-224-finetuned-eurosat
|
onceiapp
| 2023-09-20T11:25:20Z | 211 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-20T09:01:15Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9850249584026622
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0340
- Accuracy: 0.9850
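A minimal inference sketch; the pipeline task follows the card's image-classification tag, and the class labels depend on the (undocumented) imagefolder dataset.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="onceiapp/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# Any local image path, PIL image, or URL works here.
print(classifier("example.jpg"))
```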
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1854 | 0.99 | 21 | 0.0688 | 0.9800 |
| 0.0438 | 1.98 | 42 | 0.0410 | 0.9817 |
| 0.0194 | 2.96 | 63 | 0.0340 | 0.9850 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mychen76/donut-receipt_v2
|
mychen76
| 2023-09-20T11:19:38Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-09-20T09:51:58Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: donut-receipt_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-receipt_v2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Gurusha/dreambooth_holding_umbrella
|
Gurusha
| 2023-09-20T11:18:31Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-20T09:55:59Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a human sks hand holding an umbrella
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Gurusha/dreambooth_holding_umbrella
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a human sks hand holding an umbrella" using [DreamBooth](https://dreambooth.github.io/). Example images are shown below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
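A short diffusers sketch is given below; the default `pytorch_lora_weights.safetensors` filename inside the repository is an assumption.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Gurusha/dreambooth_holding_umbrella")
image = pipe("a human sks hand holding an umbrella").images[0]
image.save("umbrella.png")
```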
|
re2panda/polyglot_12b_grade_school_math3
|
re2panda
| 2023-09-20T11:14:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-18T11:24:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
amitonHFace/ppo-LunarLander-v2
|
amitonHFace
| 2023-09-20T11:13:33Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-20T09:59:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 215.91 +/- 20.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Adjust the filename to match the actual checkpoint stored in the repository.
checkpoint = load_from_hub(repo_id="amitonHFace/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dim/ruGPT-13b_dolly_oasst1_chip2
|
dim
| 2023-09-20T11:03:21Z | 3 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T11:00:43Z |
---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/ruGPT-13b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=2,
bot_token_id=46787,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
# Гавайские острова - это рай для любителей природы, культуры и приключений. Это место, где можно исследовать тропические леса, плавать с дельфинами, кататься на волнах прибоя или просто наслаждаться солнцем и пляжем. В дополнение к этому, здесь есть множество достопримечательностей, которые стоит посетить, таких как вулканы Мауна-Лоа и Килауэа, водопады Ваймеа, пещеры Халеакала и многое другое. Если вы ищете что-то более расслабляющее, то посещение гавайских ресторанов и баров может быть отличным способом провести время.
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
tvganesh/test_trainer1
|
tvganesh
| 2023-09-20T10:58:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-20T10:54:37Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test_trainer1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge1: 0.8111
- Rouge2: 0.8008
- Rougel: 0.812
- Rougelsum: 0.8109
- Gen Len: 18.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 13 | 0.3042 | 0.7518 | 0.7064 | 0.7515 | 0.7499 | 18.2 |
| No log | 2.0 | 26 | 0.0621 | 0.7853 | 0.7648 | 0.7778 | 0.778 | 18.4667 |
| No log | 3.0 | 39 | 0.0600 | 0.7809 | 0.7539 | 0.7793 | 0.7794 | 18.3333 |
| No log | 4.0 | 52 | 0.0293 | 0.8073 | 0.7961 | 0.8076 | 0.8069 | 18.4 |
| No log | 5.0 | 65 | 0.0304 | 0.8053 | 0.7881 | 0.803 | 0.8027 | 18.4667 |
| No log | 6.0 | 78 | 0.0167 | 0.7787 | 0.7634 | 0.7794 | 0.7792 | 18.7 |
| No log | 7.0 | 91 | 0.0203 | 0.8076 | 0.7952 | 0.8083 | 0.8072 | 18.5333 |
| No log | 8.0 | 104 | 0.0418 | 0.7722 | 0.7493 | 0.7711 | 0.7695 | 18.7667 |
| No log | 9.0 | 117 | 0.0153 | 0.799 | 0.7804 | 0.7969 | 0.7964 | 18.4 |
| No log | 10.0 | 130 | 0.0225 | 0.7963 | 0.7804 | 0.7968 | 0.7952 | 18.5 |
| No log | 11.0 | 143 | 0.0119 | 0.7832 | 0.7676 | 0.784 | 0.7837 | 18.5 |
| No log | 12.0 | 156 | 0.0118 | 0.8023 | 0.7863 | 0.8024 | 0.8011 | 18.5 |
| No log | 13.0 | 169 | 0.0411 | 0.8019 | 0.7916 | 0.8034 | 0.8025 | 18.2667 |
| No log | 14.0 | 182 | 0.0048 | 0.8017 | 0.791 | 0.8029 | 0.8022 | 18.5 |
| No log | 15.0 | 195 | 0.0038 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 16.0 | 208 | 0.0080 | 0.8091 | 0.7967 | 0.8093 | 0.8086 | 18.5 |
| No log | 17.0 | 221 | 0.0046 | 0.8092 | 0.7967 | 0.8103 | 0.8095 | 18.5 |
| No log | 18.0 | 234 | 0.0023 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 19.0 | 247 | 0.0097 | 0.8105 | 0.799 | 0.8116 | 0.8105 | 18.5 |
| No log | 20.0 | 260 | 0.0024 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 21.0 | 273 | 0.0018 | 0.8111 | 0.7995 | 0.812 | 0.8109 | 18.5 |
| No log | 22.0 | 286 | 0.0030 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 23.0 | 299 | 0.0042 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 24.0 | 312 | 0.0065 | 0.8102 | 0.8 | 0.8114 | 0.8099 | 18.5 |
| No log | 25.0 | 325 | 0.0004 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 26.0 | 338 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 27.0 | 351 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 28.0 | 364 | 0.0010 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 29.0 | 377 | 0.0002 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 30.0 | 390 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 31.0 | 403 | 0.0020 | 0.8093 | 0.7975 | 0.8103 | 0.8089 | 18.5 |
| No log | 32.0 | 416 | 0.0014 | 0.8093 | 0.7975 | 0.8103 | 0.8089 | 18.5 |
| No log | 33.0 | 429 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 34.0 | 442 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 35.0 | 455 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 36.0 | 468 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 37.0 | 481 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 38.0 | 494 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| 0.068 | 39.0 | 507 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| 0.068 | 40.0 | 520 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
amiiin/methods100
|
amiiin
| 2023-09-20T10:56:44Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-09-20T10:40:45Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: methods100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# methods100
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
amiiin/results
|
amiiin
| 2023-09-20T10:52:50Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-08-31T13:18:30Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: methods100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# methods100
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tvganesh/test_trainer
|
tvganesh
| 2023-09-20T10:52:48Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:tvganesh/test_trainer",
"base_model:finetune:tvganesh/test_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-14T07:05:08Z |
---
license: apache-2.0
base_model: tvganesh/test_trainer
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [tvganesh/test_trainer](https://huggingface.co/tvganesh/test_trainer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge1: 0.8325
- Rouge2: 0.8187
- Rougel: 0.8294
- Rougelsum: 0.8312
- Gen Len: 18.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 5 | 0.2345 | 0.7001 | 0.6536 | 0.6998 | 0.6957 | 16.3 |
| No log | 2.0 | 10 | 0.1472 | 0.7958 | 0.7695 | 0.7929 | 0.7965 | 18.3 |
| No log | 3.0 | 15 | 0.1174 | 0.7196 | 0.6705 | 0.7187 | 0.7118 | 16.3 |
| No log | 4.0 | 20 | 0.0554 | 0.7977 | 0.774 | 0.7907 | 0.7958 | 18.6 |
| No log | 5.0 | 25 | 0.0725 | 0.8205 | 0.8074 | 0.8188 | 0.8212 | 18.6 |
| No log | 6.0 | 30 | 0.0281 | 0.8114 | 0.7929 | 0.8098 | 0.8123 | 18.6 |
| No log | 7.0 | 35 | 0.0451 | 0.7959 | 0.7678 | 0.7908 | 0.7945 | 18.6 |
| No log | 8.0 | 40 | 0.0438 | 0.8285 | 0.8061 | 0.8205 | 0.8227 | 18.5 |
| No log | 9.0 | 45 | 0.0178 | 0.8249 | 0.8109 | 0.8225 | 0.8243 | 18.6 |
| No log | 10.0 | 50 | 0.0072 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 11.0 | 55 | 0.0119 | 0.8336 | 0.8217 | 0.8315 | 0.833 | 18.6 |
| No log | 12.0 | 60 | 0.0104 | 0.8336 | 0.8217 | 0.8315 | 0.833 | 18.6 |
| No log | 13.0 | 65 | 0.0031 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 14.0 | 70 | 0.0099 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 15.0 | 75 | 0.0067 | 0.8284 | 0.8053 | 0.8213 | 0.8226 | 18.6 |
| No log | 16.0 | 80 | 0.0019 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 17.0 | 85 | 0.0173 | 0.8143 | 0.798 | 0.8111 | 0.8102 | 18.2 |
| No log | 18.0 | 90 | 0.0007 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 19.0 | 95 | 0.0004 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 20.0 | 100 | 0.0195 | 0.8325 | 0.813 | 0.8294 | 0.8312 | 18.6 |
| No log | 21.0 | 105 | 0.0057 | 0.8325 | 0.813 | 0.8294 | 0.8312 | 18.6 |
| No log | 22.0 | 110 | 0.0005 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 23.0 | 115 | 0.0010 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 24.0 | 120 | 0.0003 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 25.0 | 125 | 0.0004 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 26.0 | 130 | 0.0005 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 27.0 | 135 | 0.0002 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 28.0 | 140 | 0.0001 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 29.0 | 145 | 0.0010 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 30.0 | 150 | 0.0003 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 31.0 | 155 | 0.0001 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 32.0 | 160 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 33.0 | 165 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 34.0 | 170 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 35.0 | 175 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 36.0 | 180 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 37.0 | 185 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 38.0 | 190 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 39.0 | 195 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
| No log | 40.0 | 200 | 0.0000 | 0.8325 | 0.8187 | 0.8294 | 0.8312 | 18.6 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
sujantkumarkv/legalpilot-7b-india-v2
|
sujantkumarkv
| 2023-09-20T10:50:31Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-09-20T10:50:31Z |
---
license: cc-by-nc-sa-4.0
---
|
dim/llama2_7b_dolly_oasst1_chip2
|
dim
| 2023-09-20T10:39:00Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-20T10:21:13Z |
---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/llama2_7b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=1,
bot_token_id=9225,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
ShaunThayil/distilbert-training-1
|
ShaunThayil
| 2023-09-20T10:24:28Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-20T10:23:55Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-training-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-training-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0337
- Accuracy: 0.9940
- Precision: 1.0
- Recall: 0.9875
- F1: 0.9937
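A minimal inference sketch; the meaning of the two labels is not documented, so the printed label names are whatever the fine-tuned config contains.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ShaunThayil/distilbert-training-1")
print(classifier("Example sentence to classify."))
```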
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0.25 | 85 | 0.0878 | 0.9819 | 0.9787 | 0.9840 | 0.9813 |
| No log | 0.5 | 170 | 0.0763 | 0.9819 | 1.0 | 0.9626 | 0.9809 |
| No log | 0.75 | 255 | 0.0487 | 0.9880 | 0.9841 | 0.9911 | 0.9876 |
| 0.088 | 1.0 | 340 | 0.0411 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.088 | 1.25 | 425 | 0.0417 | 0.9914 | 0.9964 | 0.9857 | 0.9910 |
| 0.088 | 1.5 | 510 | 0.0423 | 0.9923 | 0.9946 | 0.9893 | 0.9920 |
| 0.088 | 1.76 | 595 | 0.0404 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0325 | 2.01 | 680 | 0.0459 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0325 | 2.26 | 765 | 0.0336 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.0325 | 2.51 | 850 | 0.0358 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0325 | 2.76 | 935 | 0.0413 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0236 | 3.01 | 1020 | 0.0423 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0236 | 3.26 | 1105 | 0.0399 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.0236 | 3.51 | 1190 | 0.0380 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.0236 | 3.76 | 1275 | 0.0357 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.0222 | 4.01 | 1360 | 0.0364 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.0222 | 4.26 | 1445 | 0.0351 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.0222 | 4.51 | 1530 | 0.0329 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.0222 | 4.76 | 1615 | 0.0337 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230913+cu121
- Tokenizers 0.13.3
|
languageresearch/wav2vec2-large-robust-6-ft-age-gender-finetuned-gtzan
|
languageresearch
| 2023-09-20T10:21:53Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:SLIdataset",
"base_model:audeering/wav2vec2-large-robust-6-ft-age-gender",
"base_model:finetune:audeering/wav2vec2-large-robust-6-ft-age-gender",
"license:cc-by-nc-sa-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-20T09:55:56Z |
---
license: cc-by-nc-sa-4.0
base_model: audeering/wav2vec2-large-robust-6-ft-age-gender
tags:
- generated_from_trainer
datasets:
- SLIdataset
metrics:
- accuracy
model-index:
- name: wav2vec2-large-robust-6-ft-age-gender-finetuned-dataset
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: privateSLI
type: SLIdataset
metrics:
- name: Accuracy
type: accuracy
value: 0.9832041343669251
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-robust-6-ft-age-gender-finetuned-dataset
This model is a fine-tuned version of [audeering/wav2vec2-large-robust-6-ft-age-gender](https://huggingface.co/audeering/wav2vec2-large-robust-6-ft-age-gender) on the privateSLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0890
- Accuracy: 0.9832
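A minimal inference sketch, assuming the checkpoint loads as a standard audio-classification model (the expected 16 kHz sampling rate follows wav2vec2 defaults):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="languageresearch/wav2vec2-large-robust-6-ft-age-gender-finetuned-gtzan",
)
# Accepts a path to a local audio file; it is resampled to 16 kHz internally.
print(classifier("sample.wav"))
```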
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4327 | 1.0 | 289 | 0.2571 | 0.9134 |
| 0.6169 | 2.0 | 578 | 0.1431 | 0.9638 |
| 0.2803 | 3.0 | 867 | 0.2276 | 0.9561 |
| 0.1813 | 4.0 | 1156 | 0.1077 | 0.9780 |
| 0.0785 | 5.0 | 1445 | 0.0764 | 0.9832 |
| 0.0369 | 6.0 | 1734 | 0.0835 | 0.9832 |
| 0.1594 | 7.0 | 2023 | 0.0756 | 0.9845 |
| 0.129 | 8.0 | 2312 | 0.0761 | 0.9884 |
| 0.1519 | 9.0 | 2601 | 0.0682 | 0.9871 |
| 0.1179 | 10.0 | 2890 | 0.0890 | 0.9832 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Davlan/afro-xlmr-large-75L
|
Davlan
| 2023-09-20T10:17:17Z | 115 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"en",
"am",
"ar",
"so",
"sw",
"pt",
"af",
"fr",
"zu",
"mg",
"ha",
"sn",
"arz",
"ny",
"ig",
"xh",
"yo",
"st",
"rw",
"tn",
"ti",
"ts",
"om",
"run",
"nso",
"ee",
"ln",
"tw",
"pcm",
"gaa",
"loz",
"lg",
"guw",
"bem",
"efi",
"lue",
"lua",
"toi",
"ve",
"tum",
"tll",
"iso",
"kqn",
"zne",
"umb",
"mos",
"tiv",
"lu",
"ff",
"kwy",
"bci",
"rnd",
"luo",
"wal",
"ss",
"lun",
"wo",
"nyk",
"kj",
"ki",
"fon",
"bm",
"cjk",
"din",
"dyu",
"kab",
"kam",
"kbp",
"kr",
"kmb",
"kg",
"nus",
"sg",
"taq",
"tzm",
"arxiv:2309.07445",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-07T11:30:20Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-75L
results: []
language:
- en
- am
- ar
- so
- sw
- pt
- af
- fr
- zu
- mg
- ha
- sn
- arz
- ny
- ig
- xh
- yo
- st
- rw
- tn
- ti
- ts
- om
- run
- nso
- ee
- ln
- tw
- pcm
- gaa
- loz
- lg
- guw
- bem
- efi
- lue
- lua
- toi
- ve
- tum
- tll
- iso
- kqn
- zne
- umb
- mos
- tiv
- lu
- ff
- kwy
- bci
- rnd
- luo
- wal
- ss
- lun
- wo
- nyk
- kj
- ki
- fon
- bm
- cjk
- din
- dyu
- kab
- kam
- kbp
- kr
- kmb
- kg
- nus
- sg
- taq
- tzm
---
# afro-xlmr-large-75L
AfroXLMR-large was created by MLM adaptation of the XLM-R-large model on 75 languages widely spoken in Africa,
including 4 high-resource languages.
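A minimal masked-language-model sketch (English is one of the covered languages; XLM-R-style models use `<mask>` as the mask token):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-large-75L")
print(unmasker("Nairobi is the capital of <mask>."))
```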
### Pre-training corpus
A mix of mC4, Wikipedia and OPUS data
### Languages
There are 75 languages available :
- English (eng)
- Amharic (amh)
- Arabic (ara)
- Somali (som)
- Kiswahili (swa)
- Portuguese (por)
- Afrikaans (afr)
- French (fra)
- isiZulu (zul)
- Malagasy (mlg)
- Hausa (hau)
- chiShona (sna)
- Egyptian Arabic (arz)
- Chichewa (nya)
- Igbo (ibo)
- isiXhosa (xho)
- Yorùbá (yor)
- Sesotho (sot)
- Kinyarwanda (kin)
- Tigrinya (tir)
- Tsonga (tso)
- Oromo (orm)
- Rundi (run)
- Northern Sotho (nso)
- Ewe (ewe)
- Lingala (lin)
- Twi (twi)
- Nigerian Pidgin (pcm)
- Ga (gaa)
- Lozi (loz)
- Luganda (lug)
- Gun (guw)
- Bemba (bem)
- Efik (efi)
- Luvale (lue)
- Luba-Lulua (lua)
- Tonga (toi)
- Tshivenḓa (ven)
- Tumbuka (tum)
- Tetela (tll)
- Isoko (iso)
- Kaonde (kqn)
- Zande (zne)
- Umbundu (umb)
- Mossi (mos)
- Tiv (tiv)
- Luba-Katanga (lub)
- Fula (fuv)
- San Salvador Kongo (kwy)
- Baoulé (bci)
- Ruund (rnd)
- Luo (luo)
- Wolaitta (wal)
- Swazi (ssw)
- Lunda (lun)
- Wolof (wol)
- Nyaneka (nyk)
- Kwanyama (kua)
- Kikuyu (kik)
- Fon (fon)
- Bambara (bam)
- Chokwe (cjk)
- Dinka (dik)
- Dyula (dyu)
- Kabyle (kab)
- Kamba (kam)
- Kabiyè (kbp)
- Kanuri (knc)
- Kimbundu (kmb)
- Kikongo (kon)
- Nuer (nus)
- Sango (sag)
- Tamasheq (taq)
- Tamazight (tzm)
### Acknowledgment
Model trained by [@Jesujoba](https://ajesujoba.github.io/)
### BibTeX entry and citation info.
```
@misc{adelani2023sib200,
title={SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects},
author={David Ifeoluwa Adelani and Hannah Liu and Xiaoyu Shen and Nikita Vassilyev and Jesujoba O. Alabi and Yanke Mao and Haonan Gao and Annie En-Shiun Lee},
year={2023},
eprint={2309.07445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CyberHarem/murakami_tomoe_idolmastercinderellagirls
|
CyberHarem
| 2023-09-20T10:17:14Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/murakami_tomoe_idolmastercinderellagirls",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-20T10:02:22Z |
---
license: mit
datasets:
- CyberHarem/murakami_tomoe_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of murakami_tomoe_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 3060, download `3060/murakami_tomoe_idolmastercinderellagirls.pt` as the embedding and `3060/murakami_tomoe_idolmastercinderellagirls.safetensors` as the LoRA. Using both files together, you can generate images of the desired character.
**The best step we recommend is 3060**, with a score of 0.942. The trigger words are:
1. `murakami_tomoe_idolmastercinderellagirls`
2. `red_hair, short_hair, blush, brown_eyes`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.895 | [Download](5100/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/pattern_8.png) | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.920 | [Download](4760/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/pattern_8.png) | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.869 | [Download](4420/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/pattern_8.png) | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.932 | [Download](4080/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/pattern_8.png) | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.909 | [Download](3740/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/pattern_8.png) | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.915 | [Download](3400/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/pattern_8.png) | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| **3060** | **0.942** | [**Download**](3060/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/pattern_8.png) | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.883 | [Download](2720/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/pattern_8.png) | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.915 | [Download](2380/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/pattern_8.png) | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.905 | [Download](2040/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/pattern_8.png) | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.887 | [Download](1700/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/pattern_8.png) | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.885 | [Download](1360/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/pattern_8.png) | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.860 | [Download](1020/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/pattern_8.png) | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.756 | [Download](680/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/pattern_8.png) | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.731 | [Download](340/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/pattern_8.png) | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Carve/isnet
|
Carve
| 2023-09-20T10:17:12Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-01-29T22:51:13Z |
---
license: apache-2.0
---
`isnet.pth` - ISNet model trained on the DIS v1.0 dataset.
`isnet-97-carveset.pth` - ISNet model trained on CarveSet; achieves 98% on the test set.
|
Davlan/afro-xlmr-large-61L
|
Davlan
| 2023-09-20T10:16:59Z | 135 | 7 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"en",
"am",
"ar",
"so",
"sw",
"pt",
"af",
"fr",
"zu",
"mg",
"ha",
"sn",
"arz",
"ny",
"ig",
"xh",
"yo",
"st",
"rw",
"tn",
"ti",
"ts",
"om",
"run",
"nso",
"ee",
"ln",
"tw",
"pcm",
"gaa",
"loz",
"lg",
"guw",
"bem",
"efi",
"lue",
"lua",
"toi",
"ve",
"tum",
"tll",
"iso",
"kqn",
"zne",
"umb",
"mos",
"tiv",
"lu",
"ff",
"kwy",
"bci",
"rnd",
"luo",
"wal",
"ss",
"lun",
"wo",
"nyk",
"kj",
"ki",
"fon",
"arxiv:2309.07445",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T22:23:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-61L
results: []
language:
- en
- am
- ar
- so
- sw
- pt
- af
- fr
- zu
- mg
- ha
- sn
- arz
- ny
- ig
- xh
- yo
- st
- rw
- tn
- ti
- ts
- om
- run
- nso
- ee
- ln
- tw
- pcm
- gaa
- loz
- lg
- guw
- bem
- efi
- lue
- lua
- toi
- ve
- tum
- tll
- iso
- kqn
- zne
- umb
- mos
- tiv
- lu
- ff
- kwy
- bci
- rnd
- luo
- wal
- ss
- lun
- wo
- nyk
- kj
- ki
- fon
---
# afro-xlmr-large-61L
AfroXLMR-large was created by MLM adaptation of the XLM-R-large model on 61 languages widely spoken in Africa,
including 4 high-resource languages.
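As a quick sanity check, the model loads with the standard `transformers` fill-mask pipeline; this is a minimal sketch, and the Swahili example sentence is ours, not from the training data.
```python
from transformers import pipeline

# XLM-R style models use <mask> as the mask token.
unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-large-61L")
print(unmasker("Nairobi ni mji mkuu wa <mask>."))
```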
### Pre-training corpus
A mix of mC4, Wikipedia and OPUS data
### Languages
There are 61 languages available:
- English (eng)
- Amharic (amh)
- Arabic (ara)
- Somali (som)
- Kiswahili (swa)
- Portuguese (por)
- Afrikaans (afr)
- French (fra)
- isiZulu (zul)
- Malagasy (mlg)
- Hausa (hau)
- chiShona (sna)
- Egyptian Arabic (arz)
- Chichewa (nya)
- Igbo (ibo)
- isiXhosa (xho)
- Yorùbá (yor)
- Sesotho (sot)
- Kinyarwanda (kin)
- Tigrinya (tir)
- Tsonga (tso)
- Oromo (orm)
- Rundi (run)
- Northern Sotho (nso)
- Ewe (ewe)
- Lingala (lin)
- Twi (twi)
- Nigerian Pidgin (pcm)
- Ga (gaa)
- Lozi (loz)
- Luganda (lug)
- Gun (guw)
- Bemba (bem)
- Efik (efi)
- Luvale (lue)
- Luba-Lulua (lua)
- Tonga (toi)
- Tshivenḓa (ven)
- Tumbuka (tum)
- Tetela (tll)
- Isoko (iso)
- Kaonde (kqn)
- Zande (zne)
- Umbundu (umb)
- Mossi (mos)
- Tiv (tiv)
- Luba-Katanga (lub)
- Fula (fuv)
- San Salvador Kongo (kwy)
- Baoulé (bci)
- Ruund (rnd)
- Luo (luo)
- Wolaitta (wal)
- Swazi (ssw)
- Lunda (lun)
- Wolof (wol)
- Nyaneka (nyk)
- Kwanyama (kua)
- Kikuyu (kik)
- Fon (fon)
### Acknowledgment
We would like to thank Google Cloud for providing us access to TPU v3-8 through the free cloud credits. The model was trained using Flax before being converted to PyTorch.
### BibTeX entry and citation info.
```
@misc{adelani2023sib200,
title={SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects},
author={David Ifeoluwa Adelani and Hannah Liu and Xiaoyu Shen and Nikita Vassilyev and Jesujoba O. Alabi and Yanke Mao and Haonan Gao and Annie En-Shiun Lee},
year={2023},
eprint={2309.07445},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
firelzrd/Xwin-LM-70B-V0.1-GGUF
|
firelzrd
| 2023-09-20T10:12:06Z | 1 | 2 | null |
[
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-09-19T22:16:37Z |
---
license: llama2
---
# Xwin-LM 70B V0.1 - GGUF
- Original model: [Xwin-LM 70B V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1)
- Original GitHub repository: [Xwin-LM](https://github.com/Xwin-LM/Xwin-LM)
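A minimal sketch with `llama-cpp-python`; the quantized filename below is a placeholder (check the repository file list for the actual name), and the Vicuna-style prompt follows the upstream Xwin-LM examples.
```python
from llama_cpp import Llama

# The filename is hypothetical -- use the actual .gguf file shipped in this repo.
llm = Llama(model_path="xwin-lm-70b-v0.1.Q4_K_M.gguf", n_ctx=4096)
prompt = ("A chat between a curious user and an artificial intelligence assistant. "
          "USER: Hello, can you help me? ASSISTANT:")
out = llm(prompt, max_tokens=128, stop=["USER:"])
print(out["choices"][0]["text"])
```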
|
zhuwch/all-MiniLM-L6-v2
|
zhuwch
| 2023-09-20T10:07:25Z | 11 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-09-20T07:37:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
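An illustrative sketch of that in-batch objective (not the exact training code; see `train_script.py` in this repository for the real implementation):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity between every anchor and every candidate in the batch;
    # the scale factor is illustrative, not the value used in training.
    a = F.normalize(anchor_emb, p=2, dim=1)
    p = F.normalize(positive_emb, p=2, dim=1)
    scores = a @ p.T * scale                        # (batch, batch) similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)          # true pair lies on the diagonal
```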
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
Asheyy/dogbooth
|
Asheyy
| 2023-09-20T10:01:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-17T09:20:41Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Asheyy/dogbooth
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on "a photo of [v]dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
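A minimal inference sketch with `diffusers`; the prompt wording beyond the `[v]dog` instance token is illustrative.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Asheyy/dogbooth", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of [v]dog in a bucket", num_inference_steps=30).images[0]
image.save("dogbooth_sample.png")
```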
|
swaroopajit/git-base-fashion
|
swaroopajit
| 2023-09-20T09:58:46Z | 170 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-09-01T08:19:38Z |
---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-fashion
results: []
pipeline_tag: image-to-text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-fashion
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
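A minimal captioning sketch, assuming this fine-tune keeps the standard GIT interface from `transformers` (the image path is illustrative):
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("swaroopajit/git-base-fashion")
model = AutoModelForCausalLM.from_pretrained("swaroopajit/git-base-fashion")

image = Image.open("outfit.jpg")  # any local image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```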
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
pankaj-munde/eli5-clm-model
|
pankaj-munde
| 2023-09-20T09:56:42Z | 200 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-20T09:42:16Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: eli5-clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eli5-clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6860
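A minimal generation sketch with the `transformers` pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pankaj-munde/eli5-clm-model")
print(generator("Somatic hypermutation allows the immune system to",
                max_new_tokens=60)[0]["generated_text"])
```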
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.878 | 1.0 | 788 | 3.7036 |
| 3.7479 | 2.0 | 1576 | 3.6890 |
| 3.71 | 3.0 | 2364 | 3.6860 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Technotech/sd-prompt-instruct-3b-epoch-0.4-lora
|
Technotech
| 2023-09-20T09:54:02Z | 2 | 0 |
peft
|
[
"peft",
"stable-diffusion",
"instruct",
"magic-prompt",
"en",
"dataset:Technotech/sd-prompt-instruct",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"region:us"
] | null | 2023-07-29T15:14:28Z |
---
language:
- en
license: apache-2.0
library_name: peft
tags:
- stable-diffusion
- instruct
- magic-prompt
datasets:
- Technotech/sd-prompt-instruct
base_model: openlm-research/open_llama_3b_v2
---
# Stable Diffusion Prompt Instruct 3B (OpenLlama v2 3B)
Trained for 0.4 epochs (test) on [Technotech/sd-prompt-instruct](https://huggingface.co/datasets/Technotech/sd-prompt-instruct).
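A minimal sketch for attaching the adapter to its base model with PEFT; the prompt below is illustrative and may differ from the instruct template used in the dataset.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2")
model = PeftModel.from_pretrained(base, "Technotech/sd-prompt-instruct-3b-epoch-0.4-lora")
# use_fast=False is recommended upstream for the OpenLLaMA tokenizer.
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2", use_fast=False)

prompt = "Write a Stable Diffusion prompt for a cozy cabin in a snowy forest."  # illustrative
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```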
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
SHENMU007/neunit_BASE_V9.5.14
|
SHENMU007
| 2023-09-20T09:53:21Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-09-20T08:19:51Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
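A minimal TTS sketch following the standard SpeechT5 usage in `transformers`; the example text and the generic x-vector speaker embedding are placeholders, not values from this fine-tune.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V9.5.14")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V9.5.14")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Placeholder speaker embedding taken from a public x-vector dataset.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```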
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Boqianshen/llama-2-7b-miniguanaco
|
Boqianshen
| 2023-09-20T09:42:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:mlabonne/guanaco-llama2-1k",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-20T09:29:19Z |
---
language:
- en
license: llama2
datasets:
- mlabonne/guanaco-llama2-1k
metrics:
- accuracy
pipeline_tag: conversational
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
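In the absence of author-provided code, a hypothetical starting point is the standard `transformers` text-generation pipeline; the prompt format below is illustrative, so check the training data for the exact template.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Boqianshen/llama-2-7b-miniguanaco")
print(generator("### Human: What is a large language model? ### Assistant:",
                max_new_tokens=128)[0]["generated_text"])
```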
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mitch7w/falcon-7b-midjourney-example
|
mitch7w
| 2023-09-20T09:41:16Z | 0 | 0 |
peft
|
[
"peft",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"license:openrail",
"region:us"
] |
text-generation
| 2023-09-20T07:26:15Z |
---
license: openrail
library_name: peft
pipeline_tag: text-generation
datasets:
- tiiuae/falcon-refinedweb
model_parameters:
base_model_name_or_path: vilsonrodrigues/falcon-7b-instruct-sharded
bias: none
fan_in_fan_out: false
inference_mode: true
init_lora_weights: true
lora_alpha: 32
lora_dropout: 0.05
peft_type: LORA
r: 16
target_modules:
- query_key_value
task_type: CAUSAL_LM
---
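A minimal sketch (not from the model author) for attaching this LoRA adapter to the base model named in the config above; the prompt is illustrative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-instruct-sharded", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "mitch7w/falcon-7b-midjourney-example")
tokenizer = AutoTokenizer.from_pretrained("vilsonrodrigues/falcon-7b-instruct-sharded")

inputs = tokenizer("A Midjourney-style prompt for a neon-lit cyberpunk alley:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```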
|
hosnasn/hosna_DB
|
hosnasn
| 2023-09-20T09:32:39Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-20T09:32:37Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks hosna
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
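A minimal sketch, assuming the repository holds AutoTrain DreamBooth LoRA weights for the SDXL base model listed above; the prompt follows the instance prompt from the metadata.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hosnasn/hosna_DB")  # assumes LoRA weights are published in this repo
image = pipe("photo of a sks hosna", num_inference_steps=30).images[0]
image.save("hosna_sample.png")
```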
|